Recently, a new album appeared on Spotify under the name of UK folk singer Emily Portman – but it wasn’t hers. She hadn’t written it, hadn’t recorded it, and had no idea it even existed.

The album was AI-generated: produced, titled, and released without her knowledge, and made to masquerade as her work. Portman has said it featured cover art and metadata eerily reminiscent of her signature aesthetic, and it was rapidly picked up by algorithmic playlists.

Emily’s case isn’t unique. Artists including Father John Misty, Sam Beam (aka Iron & Wine), Teddy Thompson and Jakob Dylan have reported similar issues: AI-generated or fake albums released under their names without consent. For the artists, this is a violation. In a BBC interview, Portman described the “distressing” experience as feeling like “the start of something pretty dystopian” and highlighted the lack of legal safeguards for artists.

What’s at stake?

  • Identity theft – an artist’s name and likeness hijacked.

  • Lost royalties – fake plays eating into legitimate income.

  • Audience confusion – listeners unsure what’s real.

  • Broken trust – discovery algorithms pushing work that isn’t human at all.

If this were just a handful of isolated cases, it would be troubling enough. But the scale is far greater, and growing fast.

The flood of AI uploads

According to French streaming service Deezer, its platform now receives over 30,000 fully AI-made tracks every single day – nearly a third of all uploads, and a figure that has tripled since January. Fake artists and distributors pump low-effort, machine-made music into platforms to be streamed by bots, siphoning royalties away from real musicians.

Platform responsibility

To its credit, Deezer blocks 100% AI tracks from its editorial playlists and excludes fraudulent streams from royalty pools. But not all platforms are acting with the same urgency. Amazon, for example, integrated AI music tool Suno into Alexa just weeks after pledging to “address unlawful AI-generated content.”

Meanwhile, artists like Emily Portman are left vulnerable – waking up to discover albums they never made appearing under their names.

What needs to happen

This isn’t about debating whether AI can be creative. It’s about tackling fraud and protecting human identity. The industry needs:

  • Cross-platform collaboration on anti-fraud detection.

  • Strict identity checks before music goes live.

  • Clear labelling of AI-generated content.

  • Financial penalties for distributors who flood DSPs with fraudulent AI output.

Artists can help by reporting impostor content quickly, verifying their discographies, and speaking out collectively.

A glimmer of hope

Deezer’s data shows that real listeners overwhelmingly prefer human-made music. Despite 30,000 AI uploads per day, fully AI-generated tracks account for just 0.5% of plays. And once fraudulent streams are excluded, fewer than one in 700 listens comes from an actual human choosing to stream AI music.

It’s encouraging that the audience for authentic, human expression is still there. But unless streaming platforms move faster, that audience will be drowned in a flood of synthetic uploads.

Emily Portman’s case is a wake-up call. Deezer’s numbers prove the scale of the threat. The time to act is now. Real music deserves real recognition.

(Photo credit: Jon Wilks)