MBW Views is a series of op-eds from eminent music industry people… with something to say. The following MBW op-ed comes from Deviate Digital founder Sammy Andrews.
Not a day goes by without me having several conversations about AI in music.
I recently travelled to the (most excellent) Bogotá Music Market in Colombia and was fascinated to hear the global conversations in person, whilst being acutely aware of potential legislation that is, at times, hyperlocal.
You can be sure that no matter where you are in the world, managers, labels, DSPs, publishers, artists, songwriters, marketers, finance teams and collection societies all have views, but those views are rarely aligned in any impactful way.
Some see AI as a genuinely useful creative and operational tool. Others see it as a siphon on royalties and rights. Both perspectives have merit.
The challenge for the industry worldwide right now is to move beyond competing opinions and start building systems that actually work, without stifling the tech potential governments see for their nations.
The first step is to abandon the false black-and-white narrative and look more carefully and considerately at how AI is actually used. When it assists in a human-led production, generating stems, cleaning up a vocal, or handling mastering, authorship remains human.
The questions then become about inputs and disclosure: Were likeness rights cleared? Was the training data lawfully sourced? Was the AI’s role declared?
Some platforms have already made this mandatory. YouTube requires creators to flag realistic synthetic content. TikTok has started embedding content credentials that travel with audio. Meta labels synthetic media across its services. Disclosure is no longer a matter of branding; it has become compliance.
Fully AI-native output is a different matter. In the US, works without human authorship are not copyrightable, which removes statutory royalties and exclusivity. Rights can only be claimed through contracts, trademarks, or platform terms.
China has recognised some AI outputs where human input is deemed creative, while simultaneously imposing binding labelling rules on developers and distributors.
The UK still clings to an outdated “computer-generated works” clause that does little to address today’s realities.
Japan and Singapore permit broad text-and-data-mining exceptions for training, but they remain unclear on how outputs are treated.
The result is a patchwork of legal regimes in which the same track may be protected in one country and fall into the public domain in another.
DSPs are responding, but painfully slowly. One of the largest has removed tens of millions of tracks over the past year for suspected fraud or manipulation. It is now preparing stricter rules on impersonation, spam filters to choke off mass duplicate uploads, and AI disclosures carried through DDEX metadata.
These steps recognise the scale of the problem, but their effectiveness will depend entirely on execution. Filters must block manipulation without penalising legitimate artists, and disclosures must travel consistently across the chain. Without that, the appearance of progress risks becoming little more than window dressing.
The more complicated and unresolved issue is what counts as “human enough.” A rapper performing over AI-generated beats, a band using AI for mixing, a vocal polished with generative tools: each involves a different level of machine input.
Right now, there is no universally or even nationally accepted threshold. Leaving platforms to define this independently risks a fragmented environment where rules shift from service to service. What the industry needs is a shared framework, developed with rights-holders, creators, and regulators, that can be applied consistently across societies, DSPs, and licensing.
Verification is another weak link. PRS has introduced “know your customer” checks, but most distributors have not. Without consistent onboarding standards, fraudulent actors can migrate freely between services.
Preferred-partner schemes and verification marks look credible but mean little without genuine due diligence and consequences for those that enable spam.
This weakness helps explain why Deezer now reports over 30,000 fully AI-generated tracks every single day, almost a third of all new uploads, with up to 70% of streams of those flagged tracks identified as fraudulent.
Those uploads don’t just clutter platforms, they distort royalty pools. Other services have remained quiet, and that lack of transparency is itself a problem. If streams are being siphoned, rights-holders need to know the scale in order to conduct business accordingly.
The impact is not confined to Europe or North America. In Latin America, musicians are protesting against what they describe as an AI flood, complaining that their catalogues are being buried under synthetic tracks on Spotify, Deezer, and YouTube Music.
“AI is global, but the systems for governing it are fragmented.”
Beyond royalties, they are facing impersonation and the erosion of visibility. The lesson is obvious: AI is global, but the systems for governing it are fragmented, and artists in developing markets often face the sharpest edge of the disruption.
Metadata offers some partial answers. ISRC remains process-neutral and should not, in my opinion, be split into “human” and “AI” codes, but it does not capture provenance. DDEX has attempted to address this with ERN v4.3.1, which introduces optional flags to show when a recording or contribution was made fully or partly with generative AI.
This integrates disclosure into the same supply chain that already governs release data, rights and pricing. On the content side, C2PA credentials allow provenance to be embedded in audio files, while ISCC creates fingerprints that help detect duplicates and manage fraud.
These are useful tools, but they are incomplete. The DDEX fields are optional, they don’t require disclosure of the specific model or vendor, and they leave the term “partially” undefined.
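To make that disclosure idea concrete, here is a rough sketch of how a generative-AI flag might travel alongside a sound recording in an ERN-style XML delivery. The element names and values below are illustrative only and are not copied from the published DDEX ERN 4.3.1 schema; any real implementation should follow the standard itself.

```xml
<!-- Illustrative only: element names here are hypothetical,
     not the actual DDEX ERN 4.3.1 schema. -->
<SoundRecording>
  <ResourceReference>A1</ResourceReference>
  <DisplayTitleText>Example Track</DisplayTitleText>
  <SoundRecordingId>
    <!-- Example ISRC in standard 12-character format -->
    <ISRC>GBXXX2500001</ISRC>
  </SoundRecordingId>
  <!-- Hypothetical disclosure flag: "Partially" is exactly the
       undefined middle ground criticised above -->
  <GenerativeAiUsage>Partially</GenerativeAiUsage>
</SoundRecording>
```

Because such a flag is optional and a value like “Partially” is undefined, two distributors could describe the same track in two different ways, which is precisely why universal adoption and tighter definitions matter.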
Some momentum is building. Universal Music Group and Beggars Group have committed to using these standards, and distributors are starting to follow. In May this year, SonoSuite upgraded its Spotify feeds to ERN 4.3 as part of its Preferred Provider status. In June, Revelator did the same.
Implementation is beginning but, until adoption is universal, the benefits will remain limited. Metadata has always been the industry’s weak spot; in the age of AI the cost of half-measures is far higher.
Policy responses diverge. The EU’s AI Act is now in force, with transparency obligations to be fleshed out through delegated acts later this year.
The US Copyright Office has held firm on human authorship, while lawsuits against AI developers pile up, creating settlements that provide partial guardrails but little true clarity.
The UK has floated a training exception with a rights-holder opt-out, but without mandatory dataset transparency the opt-out is effectively unenforceable.
Resistance is building. In September, the Musicians’ Union passed a motion at the Trades Union Congress demanding an AI bill with stronger copyright protections and fair remuneration for creators.
China has already imposed binding rules requiring both visible and embedded labelling of synthetic content. WIPO, despite years of consultation, has failed to deliver enforceable standards.
Meanwhile, courts across the US and Europe are still wrestling with the fundamental question: is training on copyrighted works without consent an infringement, or not?
Licensing training data remains the structural gap. One-to-one deals cannot scale. Collective licensing is the only workable model, yet most societies have hesitated and left publishers to litigate and labels to strike bilateral agreements.
Sweden’s STIM has broken ground with a collective AI licence for creators who opt in, requiring attribution technology such as Sureel to track how works influence outputs and making revenue flows auditable in real time. It may become a blueprint for others.
At the same time, the arrival of enterprise-focused models like Stability AI’s Stable Audio 2.5 shows that robust licensing frameworks are not a theoretical need, but a commercial necessity.
It is worth remembering that AI is not only a disruptor, it is also being used to fix the industry’s existing weaknesses. BMG, working with Google Cloud, has launched StreamSight, a tool designed to accelerate royalty forecasting and make payment processes more transparent.
This illustrates the double edge of AI: the same technology that threatens to swamp the system is also being deployed to modernise it.
“Before anyone rushes to legislate or rewrite contracts, the industry should ask whether it is ready to throw stones from glass houses.”
AI is now part of music creation. Sometimes it is a studio tool, sometimes a collaborator, sometimes the principal composer. The task is not to reject it but to integrate it into systems that protect attribution and value. That requires clear disclosure, stronger verification, effective fraud control and scalable licensing.
The real risk is not that AI overwhelms the industry. The real risk is that the systems underpinning recorded music remain fragmented. Standards exist but are inconsistently applied. Laws are advancing but not aligned. Platforms are taking action but are still reluctant to publish the data that would prove effectiveness.
And before anyone rushes to legislate, label, or rewrite contracts, the industry should ask whether it is ready to throw stones from glass houses. Catalogues are already riddled with inconsistent metadata, missing credits and, in some cases, tracks that carry AI fingerprints no one has admitted to. I also suspect that in some places legislation would disproportionately impact certain genres, such as electronic music.
Companies like Uhmbrella are now offering the ability to audit entire catalogues, scanning recordings for AI involvement, metadata gaps, or unlabelled generative content. If labels, publishers and distributors do not first clean their own shelves, they risk building new rules on shaky ground.
Trying to impose order while ignoring what is already in the system is an invitation to misallocated royalties, hidden liabilities and unnecessary fights with artists.
Unless those gaps are confronted head-on, confidence in streaming will weaken further, and AI will continue to expose just how fragile the foundations of this industry already were.
