What These New Fake Miranda Lambert, Chris Stapleton Songs Say About AI

Yes, AI is hitting the saturation point in society that we all feared, especially on places like social media. But it’s all so sloppy and ridiculous and insulting to the intelligence that it’s eating itself from the inside.

In November of 2025, when an AI artist named Breaking Rust was breaking the internet by topping the Billboard Country Digital Song Sales chart and setting off alarm bells everywhere, the outlook for holding back what seemed to be an impending onslaught of AI content appeared bleak. That onslaught threatened to catastrophically overwhelm the entire music industry in 2026, and quickly kill off the careers of countless human creators.

Now, a couple of weeks into 2026, the concern about AI is certainly still elevated and critical. It’s still imperative that everyone in the industry insist on the disclosure of AI-generated content, and set up policies for handling it in ways that protect human creators. But for a few important reasons, the five-alarm AI fire that seemed to be raging out of control going into 2026 has begun to feel more like a blaze that might be manageable, at least in the near term.

It’s not because anyone in the music industry has exuded any sort of leadership on the matter. Major labels are partnering with AI companies left and right, giving up the copyrights of their artists to train AI models with little authorization from the artists themselves. Labels want their piece of the AI pie. Billboard and others still refuse to take firm stances on disclosing or excluding AI from charts. Even the Grammys’ president is softening his position on the technology.

“It probably sounds a little crazy, like, ‘This guy doesn’t have his position together,’ but it’s really tough because I want to advocate for our human members and human creators, but I also realize that this technology is here,” Grammy President Harvey Mason Jr. said recently.

The reason the momentum behind the AI tsunami feels like it’s starting to subside is because, far beyond just escaping the lab, AI tools capable of making full songs and albums in a matter of seconds in free trial modes have fallen into the hands of the worst actors and dregs of society, who have so thoroughly over-exploited and undercut the technology that it’s collapsing under its own weight.

Yes, AI is hitting the saturation point in society that we all feared, especially on places like social media. But it’s all so sloppy and ridiculous and insulting to the intelligence that it’s eating itself from the inside, giving the music industry and human creators at least a short-term reprieve during which, hopefully, cooler heads and smarter minds can figure out how to move forward with this technology more equitably.

As massive corporations and tech oligarchs try to sell society on the essential nature of new resource-sucking data centers in their local communities, or on public investment in the AI arms race, every single day the public grows more skeptical of the technology, more dubious of its outcomes, and less likely to interface with platforms that favor AI slop, let alone supply them with their audio entertainment, because it all feels so smarmy.

Just over the last week of casually perusing the internet, Saving Country Music came across new AI-generated fake songs from Miranda Lambert and Chris Stapleton. Going down a rabbit hole, a whole gaggle of fake Miranda Lambert songs from the same account was discovered, along with about a dozen fake AI Chris Stapleton songs, all of which were around eight minutes long. No, they haven’t gone viral. But they also haven’t spurred national outrage with fans demanding they get taken down.

Note: Videos screenshotted so as not to encourage plays.

Where’s the outrage? It’s nowhere to be found, because coming upon completely and obviously fake AI material is now just part of daily modern life. Years ago, there would be news stories about this, and involved think-pieces about the existential crisis it poses to the industry, personal identity, and intellectual property. Now, users who stumble upon fake, AI-generated content don’t even feel the need to report it. Why would you? It’s just one of a million. As dystopian as it all might feel, it’s also been rendered anodyne by its prolific nature.

A recent study by YouGov determined that only 5% of Americans say they “trust AI a lot.” 68% of respondents wouldn’t let AI act without specific approval. And 77% are concerned that AI could pose a threat to humanity itself. Yes, over three-quarters of the population thinks that AI could be an existential threat to the species. And all of these numbers are up from previous benchmarks, meaning public buy-in is dramatically waning, even if people are still using ChatGPT to help with menial, everyday tasks.

Meanwhile, any time a company uses AI to generate a commercial, it generates a massive backlash. Hollywood thought (or worried) human actors would become obsolete. But the public is roundly rejecting AI for narrative-form entertainment. Merriam-Webster’s Word of the Year for 2025 was “slop,” meaning “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” In the war for hearts and minds, AI is losing.

In the financial realm, everyone’s talking about a potential AI tech bubble, and whether it’s ready to burst. But folks should remember that even when the dotcom bubble burst in 2000, it didn’t mean the internet didn’t happen. It just grew at a more reasonable pace, and in a more intuitive, natural, and at times regulated way too.

AI has become so messy, some are predicting the end of social media, since it’s been so crudded up that it insults the intelligence to even interface with it. Others are talking about the end of the internet altogether, with Gen Z unplugging more and more, preferring analog and in-person experiences, with AI exposing the phony nature of the entire thing. Physical media in music is surging, with vinyl becoming more popular than at any other time since the ’80s, and CDs and even tapes increasing their sales in 2025.

None of these assessments should mean we should cease the alarm bells about what AI could wreak, not just on the music industry and human creators, but throughout society. The AI wave is still most certainly coming, while it’s already disrupting music in significant ways. And for young adults just graduating high school and college, AI has already resulted in a recessionary economy and 10% unemployment for them.

But it appears that the big, necessary check on AI’s power is not coming from governmental regulation or institutional guardrails, or from safety protocols instituted by the developers themselves. It’s coming from AI itself enacting such a wholesale enshittification of everything it touches, it’s like running an integrity-eroding exposé on itself every single day that reaches the masses.

If the proprietors of AI had any smarts beyond the programmatic intelligence of building the thing, they would have kept it in the hands of the few. The earliest pioneers of AI technology scoffed at the mere idea of connecting AI to the internet as catastrophic. Now the internet is primarily where AI lives, in consumer-grade interfaces used to make fake songs from real artists to dupe morons.
AI was supposed to cure cancer and get us to Mars. Instead it’s just enraging your elderly parents and grandparents with fake news, and doing bad Miranda Lambert impressions.

The thing about technology is it tends to only get better and better. But the thing about modern society is it’s tending to only get worse and worse. Society might not be immune to AI’s disruptive effects, but AI isn’t immune from the ills of late-stage capitalism or the mediocrity crisis. In fact, AI is helping to feed the devolutionary trend.

There will always be a certain level of gullible people able to keep hucksters, fraudsters, and outright criminals afloat. But the rest of society has already grown accustomed to separating the wheat from the chaff, spotting AI from the start, and embracing the real over the fake, whether it’s food, or information, or music.

Yes, the fear of what AI could have in store for 2026 is still real and valid. But there should still be a base level of trust in humanity to sniff out bullshit, and sidestep it.

– – – – – – – – – – –

Trigger, knowing your stance on AI created music, you must be getting tired of writing articles about it.
Hopefully smarter folks will prevail and someone will create some safe parameters for AI.
I wonder what Chris and Miranda think of this “plagiarism” of their music. Any comments from their camps?

The leaders of the big tech platforms are looking to mine rare earth minerals in Greenland to keep this AI monster growing. The music labels got out in front of this monster by partnering with AI companies like Suno – idk if it was to protect their investment in their own artists’ intellectual property or to also monetize the growing AI-created music thing. Either way I see no institutional ‘pumping of the brakes’ on this AI crap.

One of the things I was trying to underscore in the article is that this practice is so pervasive, it’s probably not even worth reaching out to camps for comment. Pretty much any major star is going to have dozens, maybe hundreds of fake songs attributed to them on YouTube at any given time. And even if they don’t have any right now, they did at one point, or they will in the future. It’s like whack-a-mole. This is so demonstrably eroding good will and trust in the tech, it’s bordering on existential for AI’s proponents. No doubt there’s an underbelly of people who fall for this stuff. But as time goes on, that number shrinks, and the amount of people renouncing anything AI increases. Really, it shouldn’t be artists and journalists policing this stuff. It should be Meta, Suno, Google, Grok, and these other AI companies who should not want this behavior representing their tech.

Here, pathetically illustrated by the AI doing a better job than does the “artiste” whereby the joke thus tells itself. Think of it as a better alternative than being alone with Shania Twain and there is no one else left on earth…

……87% of all dementia cases arise from having listened to an AI impersonation of Ernest Tubb ? and that if you don’t remember this, you’re already too far gone….(as was artificially confided by J Burton to James Reece twenty-years ago via one very uncharacteristic look of disgust from him concerning any reason held on behalf of saving country music

Technology keeps getting better but it also brings out the worst in human nature. I have less faith in humanity to sniff out AI because I am CONSTANTLY hearing people at my job and in public at a coffeehouse enthusiastically praise the “cool” things AI can do. I have also spent a decent amount of time reading comments on Suno-related threads (and even comments from people I know personally) about how they believe their own AI created music is valid and how they believe the opposition to it is “gatekeeping” and “jealousy and fear” and how they should “make better music.” These aren’t one-off remarks; this is a common thing I see being posted. There is a much larger segment of the population that is amenable to AI music than you are aware of. AI is directly targeting non-artistic people and giving them a sense of feeling artistic. AI is playing directly into people’s desires and egos. Where social media gave people this fake idea of having windows to look into to watch their favorite artists, AI is giving people a delusion to think they can create art themselves by having software build an entire lo-fi professional track around their shitty poem.

I also saw a guy post in a Murfreesboro TN music page looking for musicians to perform a unique setlist of rock songs. Out of the 25 he posted, 4 were AI songs. The guy took the post down after a few people commented. Some other dude posted in a Nashville musician finder page asking for models for his AI music label. Thankfully 90% of the comments were essentially telling the guy to fuck off but you still have that 10% of the population that can be persuaded into thinking they can get money and attention from AI music.

I think most people saw this coming a mile away that AI-created music was going to play into people’s desire for nostalgia and create fake songs from older artists. The silver lining is that people generally don’t give a shit about their friend’s AI-created music and the kind of people who seek out that music are one deviation above mentally challenged. However there are many people passively letting YouTube autoplay videos and there are many AI-created songs being promoted in the algorithm – I’ve seen this happen at my parents’ house. I have so much to say on AI and I realize I’m rambling but I can’t help feeling blackpilled on AI because where Amazon and eBay and the streaming services made it easier than ever to buy quality music or purchase movies that Roger Ebert rated as the greatest – we are instead in a world where Netflix has to spoon-feed people the plot because most people are scrolling on their phones while passively watching streamed shows and movies.

You and I must run in different circles, because the prevailing feedback I get from people I work with or spend time with is they are sick of AI being jammed into every platform and app they use. I also run into the boosters – but nowhere near the same rate as I do the detractors. A few people I know thought it was fun to create AI Christmas cards, but beyond that don’t seem to be embracing the tech at the level the companies need us all to in order to get an ROI for Wall Street.

Ironically, I think the big tech companies’ rush to ramrod AI into their OSes and apps is actually creating more negative feedback/pushback than if they had just released models like Gemini or CoPilot and made them opt-in or pushed them at more advanced users.

Instead they have jammed these half-baked models into everything from Google Search (which was already in decline due to SEO bait) to Outlook, while said models still regularly hallucinate or just plain don’t work as promised, and it is creating more vitriol at the tech.

But I agree with most of the rest of your take – it seems most of the AI garbage being generated now is being consumed more for novelty than because many people are actually saying, “wow – this is really good!”

This is the reason I cited statistics and linked to the source in the article. Of course we can all find people who say they love AI. Statistically, the vast majority of Americans are very skeptical or distrusting, and growing more distrusting by the day. So I can’t speak to why Strait is constantly hearing from people saying they think AI is cool. But they are a minority, and their numbers are shrinking.

There are 15 comments here (and 5 of those are mine) and two people have already said good things about AI-created music.

I wouldn’t describe those AI lovers as my circle. Coworkers are hard to avoid at times – even for a WFH job. My creative friends reject AI too. I work adjacent to IT at a healthcare company and all of the managers and people above me use AI to create and proofread emails and documents and take notes from meetings. I was told it’s basically expected now in those higher levels. I even ran the CEO’s Christmas message thru a few AI check websites and it was AI-generated in part. I reject using anything more than traditional spell check.

“I have also spent a decent amount of time reading comments on Suno-related threads (and even comments from people I know personally) about how they believe their own AI created music is valid…. AI is directly targeting non-artistic people and giving them a sense of feeling artistic.”

Exactly, it’s giving people with no talent an opportunity to create something from content stolen from people with actual talent. Anyone who is making music using AI (instruments, lyrics, melodies, etc.) and believes what they are creating is real content and that others are just jealous, or whatever, is delusional or stupid (likely both). Unlike Trigger, I feel this is only the beginning, and what I have continually referred to as AI Armageddon is still coming.

I have a friend, a tech genius and engineer, who believes AI will eventually get so good it will write the books people read, the movies they watch, the music they listen to, news articles, and on and on. He believes most people won’t be able to tell the difference.

My friend also said, and I believe Trigger touched on this in one of his other articles (or perhaps it was in the comments), that live music (real live music), a guy or a girl with an acoustic guitar or a piano, will be a big draw for those who are starved for authenticity. Real will always be there.

No matter what happens, the technology isn’t going away. I’m not even sure at this point it can be regulated or contained. All we can do is enjoy each day and make sure to keep copies of our favorite albums, books, movies, etc.

“Unlike Trigger, I feel this is only the beginning, and what I have continually referred to as AI Armageddon is still coming.”

Oh, I definitely still feel this way, and tried to underscore and emphasize this in the article. All I am saying is that I believe the adoption of AI is going to be slower than I initially expected, due to the low-quality, high-volume AI content flooding people’s feeds unchecked, creating a lot of negative sentiment about the tech. I really thought by Q2 2026, we would see the utter implosion of the music industry. Now I think it’s going to take longer, even though the deleterious effects are already here.

“Oh, I definitely still feel this way, and tried to underscore and emphasize this in the article. All I am saying is that I believe the adoption of AI is going to be slower than I initially expected…”

Hopefully society as a whole can set restrictions on AI encroachment because it’s inherently anti-human. Fundamentally AI does not serve itself and it removes the need for individual human thought and contribution. Its growth is just a replacement of people. AI doesn’t have “taste” when it comes to art. It doesn’t understand human psychology and philosophy and has zero skin in the game in human matters. If you imagine these AI tech advancements to their logical conclusion it just removes the need for people to even exist.

As a kid of the 80’s, I got my share of electronic music forced upon me, and it still sucks whenever I hear those boxed voices and loud synths. Today, with the autotune etc, it sounds just as bad. Lifeless, too polished and lame. Not to mention the lyrics. It’s been a downward spiral, and within 5 years or so, the “real” music will be integrated with AI, on stage and in the studio.

To quote Steve Goodman, from the song “Banana Republics”: “Give me some words I can dance to or a melody that rhymes.”

Eventually Artificial Intelligence scrapes enough garbage into its database it becomes Actual Ignorance…

What is your take on this scenario? I write songs. I make rough acoustic demos, which outline the changes, phrasing and lyrics. I’ve been using AI to produce them since I live somewhere without actual country musicians, and by being really detailed, I am producing a completely intentional result, good enough to call a high quality demo. I’m not trying to pass it off as anything other than my writing, and the intended production.

I realize you didn’t ask me but are they your raw vocals on the tracks? Producing tracks with your raw vocals and AI instrumentation is less artistic than singing to a karaoke track – because real people made those.

I record raw demos. I play the parts enough to sketch out the arrangement, I sing the vocals so the phrasing is right. Then I upload and describe the intended production. It replaces my voice (which is DEFINITELY for the better), and fleshes out the arrangements. Essentially what would happen if I could go to a session with demo players, except I can work through iteration until it sounds right.

Look, I’m not afraid to be the asshole. That isn’t valid music. If a computer is generating the vocals, you aren’t an artist any more than someone playing a racing video game is a professional driver, or someone playing Tony Hawk Pro Skater is comparable to a skateboarder. At most you could call yourself a poet.

I wrote and arranged it. The lyrics, changes, melody, instrumentation, etc. It’s essentially a synthesizer the way I’m using it. So….

Just because technology can auto-generate vocals and music tracks that sound lo-fi and professional, that doesn’t put you in the same bracket as an aspiring artist using his own vocals and paying studio musicians to do the best they can with what you provided them. Trying to pass off AI music puts you beneath Cody Wolfe, because at least he used his own voice and his own money to let the studios use pitch correction and their best efforts to cram his bad lyrics and bad lyric ideas into a track with 4/4 timing. Country music as a genre is very forgiving when it comes to poor vocalists but it has to be real vocals and real instruments, otherwise it’s somewhere between playing Guitar Hero and making an EDM track.

Yes, you made it with the intention of people listening to it the same as other music correct? AI-generated vocals and music is nonsense.

This has come up before in these discussions. Using AI prompts to build out song demos I don’t think is an entirely evil practice. It makes sense to me as a quick and easy way to present a song. As long as it wasn’t released commercially, I wouldn’t necessarily have a problem with it. The problem is it’s obviously not stopping there, as evidenced by the Miranda Lambert/Chris Stapleton examples. For everyone using AI responsibly, there’s ten trying to game the system to scrape money out of it.

I’m not trying to do that. Even doing what I do, it gets about 90 percent to what I intended, which is significantly past what I can personally perform, and a hundred percent past what the local metal/punk centered music scene could deliver.

I think you are robbing yourself of most of the fun of creating music. If you aren’t a good enough player, practice until you are good enough. If you aren’t good enough and aren’t willing to practice, then the music business is not the business for you. If someone pitches me an AI demo, it goes directly in the trash, mostly because if you used AI to demo it, you probably used AI to write it.
If you are a writer and can’t convey your song well enough yourself to pitch it, you aren’t good enough to make it.

I play about an hour a day and I make my own normal recordings as well. I can do fine making pretty loose and jangly, Replacements-level stuff all day. But I have not yet reached the point of an A-Team session pro, and that’s basically what I use AI to explore, since there simply aren’t enough country musicians where I am located. AI is a tool I can use for production iteration/mock-ups when I want to see how something might sound in a different kind of arrangement. Which gives me more ideas for playing. For example, I was writing a song recently and struggling with a change in the chorus. I used AI to quickly help me iterate some options. It turned out the best solution was to do less, not more than I’d been doing. I also have a song that I wanted to see how well it worked in a Class of ’89 type arrangement, which requires a level of instrumental skill I aspire to, but haven’t achieved yet. Hearing that production helped me refine a few phrasing and melodic choices in the prechorus that helped me seamlessly move from verse to chorus and nail the vibe I was after. That’s how I use it.

I have been listening to the Hank Sr., Jr., and III AI songs on YouTube. Most sound pretty good, like Sr.’s version of “He Stopped Loving Her Today” and Jr.’s version of “Carrying Your Love With Me.”

Trigger, did you see where Sweden recently banned an AI-generated song from their official charts? http://www.theguardian.com/technology/2026/jan/16/partly-ai-generated-folk-pop-hit-barred-from-swedens-official-charts

Very interesting. Yes, I would say this is the correct approach. Maybe make a separate chart for AI songs, at least for now until we figure out how disruptive all of this is going to be.

I agree that the unauthorized distribution of AI generated ‘songs’ attributed to actual human artists is evil and needs to be curtailed.

I wonder though about your logic, which seems to suggest that fans will prefer AI-generated “slop” to real music created by real human musicians to such an extent that it will threaten the music industry.

Generative AI will get better and better. It won’t be (and isn’t today) all “slop”. If fans prefer that over what Nashville is putting out, what does that say about Nashville? Isn’t the answer for musicians to up their own game?

The appeal to emotion logical fallacy that allowed those Randy Travis AI tracks to be accepted by the public was the trojan horse that brought in this demon. If the unauthorized distribution of AI songs representing an actual person is wrong, then is it ok if the company or person who manages a dead person’s music catalog and intellectual property releases AI music of that dead artist? If your response is that it’s only ok if the artist himself agrees to it, then how can the argument be made to ban AI-created music from someone who is an aspiring artist using AI to make their voice sound professional? If the argument is that AI created vocals are ok in some cases if there is a strong enough emotional case (Randy Travis losing his voice) then why wouldn’t the emotional plea for nostalgia be next to use AI to create new songs from loved artists?

“The appeal to emotion logical fallacy that allowed those Randy Travis AI tracks to be accepted by the public was the trojan horse that brought in this demon.”

We’d be dealing with this AI bullshit even if Randy Travis hadn’t walked away naked and on Ambien when he wrecked his Trans-Am. Note that even Randy has walked away from using AI. It was too polarizing. It’s even more polarizing now, and Travis is one of the few who actually had a good excuse to use it.

“I wonder though about your logic, which seems to suggest that fans will prefer AI-generated “slop” to real music created by real human musicians”

I really think this depends on who is listening. Active music listeners who buy albums, attend shows, and read silly music websites like this one most certainly will prefer human creators over AI. As for the gen pop who mostly see music as background noise and distraction, they probably won’t care either way. My top line suggestion is that all music be marked as either AI, or clean of AI, just like we mark music with explicit lyrics. That way consumers who care can make informed decisions.

” If the argument is that AI created vocals are ok in some cases if there is a strong enough emotional case (Randy Travis losing his voice) then why wouldn’t the emotional plea for nostalgia be next to use AI to create new songs from loved artists?”

I’m not making this argument. A person’s likeness is part of their intellectual property. You cannot just borrow the name and visual likeness of Miranda Lambert. It’s intellectual property theft. Now, if an artist authorizes their representation to be used this way, that’s a different story. But my guess is most artists never would.

Also, I agree AI will just get better and better. But I’m of the opinion that the entire thing is such a mess right now, and so wholesale eroding public trust and buy-in, that this will injure AI’s adoption and tolerance throughout society in ways adverse to its proponents. They’ve really duffed this thing. The polls don’t lie.

Back in the 70s, when synthesizers were gaining popularity, Queen plastered on their records “No Synthesizers” — the sounds were generated by Brian May’s guitar prowess. They dropped that after several albums, when even they started using synthesizers.

If people & platforms resist tagging music as being made with AI, purists still have the option of saying “No AI”.

But adoption is picking up. The technology enables huge productivity gains. See Bob Lefsetz’s recent blog post:

The only redeeming thing here is that Jack Tempchin’s AI songs have less than 10 likes each. I don’t understand how anyone can listen to AI music willingly. It causes me discomfort similar to staring at my phone too long.

And Queen also made their point by opening The Game (their first album with synths) with a synth wash that clearly couldn’t be any other instrument.

Queen were never anti-synthesizer as an instrument; they were against people thinking the guitar parts Brian May had spent two days overdubbing were just a synth programmed to “multiple guitars”. A synthesizer still has to be played by a keyboard player just like any other keyboard; you can’t just grab a Mellotron and tell it “sound like Chris Stapleton.”

I agree that most artists would not allow AI recreations of their voice; however, after they die, their intellectual property is owned by their designated beneficiaries. What if those beneficiaries wish to use AI to create and sell tracks of their voice? It would be legal.

This is apropos of nothing and advances the conversation zero, but The Hard Times news (satire) had this headline just today:
“Morgan Wallen denies A.I. used for new song, titled: ‘Country Music, Also Known as Country and Western or Simply Country, is a genre of music.’”

I’ve always maintained AI isn’t going to lead to Skynet and Terminators, like most people envision. No, it’s going to drown us in digital garbage and waste precious resources. It’s honestly the best use for the word enshittification that I can think of.

It’s something Geezer Butler of Black Sabbath is doing for song demos, so actual singers can hear how he wants the vocals to sound.

I really have no issue with something like this, because he still plans to use an actual human singer on the album.

AND… what happens when the estates of Merle Haggard, Waylon Jennings – even Prince – open the vaults to the reportedly hundreds of unreleased songs by each of them? I believe this question was floated out there when Shooter Jennings put out the recent “Songbird” album of Waylon’s unreleased recordings, wondering if there was any AI involved – if these songs were really and truly Waylon…

The CGI duets of Bocephus and Hank, Sr., and Natalie Cole and her dad, Nat King Cole, were obviously sold as manipulated images, but the recordings were real and mixed (as were the reviews). But then, the technology was a lot less sophisticated, and everybody knew the fathers had passed years ago…

Of course, there’s always been a bit of “manipulation” in the studio – setting levels, overdubbing – but these things have, at least until recently, been done “by hand” – by the engineers and artists and producers using their ears and eyes (on the board) and intuition and, yes, heart – to craft the art they hear in their musical creations… and it’s this lack of the “human element” – this intuition – that makes AI sound and feel so false, in both music and the visual arts…

Every time this issue comes up, folks cite Autotune, synthesizers, 808’s (drum machines), etc., digital recording, etc. and say, “What’s the difference?”

The difference is none of this technology allows you to go from typing text into an AI prompt to completing an entire song impersonating a living country singer in a matter of seconds, and having it uploaded to YouTube generating income in a matter of minutes. These other things are tools just like anything else. You still have to program the drums, play the synthesizer on a keyboard or MIDI interface, etc. AI represents such a dramatic leap forward that its threat to human creators is existential, and it threatens to flood the zone with music to the point where we won’t be able to find human-created music even if we go searching for it.

I can think of several bands (that nobody here would care about) that were, say, between drummers, and the main songwriter used programmed drums as a songwriting tool; but even after 95+% of the drums were replaced with a real drummer, they liked how certain sections sounded with the programming. I’m cool with that; I can’t imagine feeling the same way about AI contributions.

Once again, Trigger, thank you for doing the hard work. Someone a few weeks ago sent me an AI George Strait/Morgan Wallen duet Gospel song. I couldn’t bring myself to listen. Last night they sent me a Christian version of an AI Jinks-esque song. I’m so torn. Sharing music is such a personal and intimate thing. I want to listen and share the experience with my friend but also have major disgust at giving AI art more play.

Trying to be sure what is real or not is tiresome. Not many know what to believe. First the news, now music. I think it’s going to get worse, and this is just the start. Is buying physical music (CDs, vinyl) the answer? I still prefer my music on CD.

“The AI wave is still most certainly coming, while it’s already disrupting music in significant ways. And for young adults just graduating high school and college, AI has already resulted in a recessionary economy and 10% unemployment for them.”

Is there evidence to back this up? I haven’t seen AI do very much that would replace workers, even young ones. I think what we’re seeing has a lot more to do with the hangover after the Zero Interest Rate Policy era and associated dysfunctions like email jobs, which were pretty fake long before Large Language Models went mainstream.

I think AI music is actually a pretty apt case-in-point for the whole thing: If you need *good* songs, you have to get some good humans to make them. But if you need *bad songs fast*, yeah, AI is going to win that race. But there’s only so much market for lots and lots of bad songs.

No, bad songs are the market, more often than not, ever since Al Jolson (who recorded a lot of popular stinkers).

The people developing AI like Elon Musk and Sam Altman have said themselves that AI will destroy most jobs. That’s the whole point of forwarding Universal Basic Income. Nobody is arguing that AI won’t cause catastrophic job loss, and that’s already happening at the entry level.

Maybe I’m just a teeny bit too skeptical, but I decline to take the word of billionaires trying to pump up their own hype cycles.

And let me quote the lede of the techjacksolutions article as kind of a case in point:
“Your first job probably won’t exist in ten years.

“That’s not hyperbole. Goldman Sachs projects that AI could replace the equivalent of 300 million full-time jobs globally. The catch? Most of these eliminated positions are exactly where young people start their careers.”

Well, that’s not hyperbole, Trigger, but it is a *projection*. It’s a *forecast*. Again, maybe I’m just a little too skeptical here, but I’ve learned in my few short years here on earth to take projections and forecasts with a very large grain of salt.

LLM chatbots have been consumer tech now for 3-4 years. Where are the business success stories? Show me the money.

Again, techjacksolutions:
“Companies like Salesforce and Shopify have stated explicitly they’re looking to meet growth needs with AI rather than new human hires. This represents a fundamental shift in how businesses think about scaling operations.”

“Stated.” “Looking to.” OK. Let me know when it matters enough to put in the quarterly 10K for the investors. We invested /x/ in AI. We reaped /y/ savings in hiring costs.

In the meantime, I can point to 1000 other macro and microeconomic factors that might be leading to low employment among college-educated young adults. I have young adult kids–believe me I know the employment recession is real. My own industry is in a major hiring downturn right now, but as a user of AI in my day-to-day work, I’m extremely doubtful that AI is going to replace many jobs soon, even entry-level ones.

Closer to your blog: If AI is going to destroy music in the next six months (I recognize you’re backing off of this prediction), then where are the *good* AI songs? Can any of them hold a candle to anything on your Top 25 playlist? If not, well, I think there’s good reason for concern, especially when it comes to plagiarism and fraud, but I think the wholesale destruction of the human music industry is over-egging the pudding, especially in this little corner that benefits so much from a human touch.

I’m not saying that any of these doomerist/utopian scenarios are impossible, just that we’ve been at this long enough that I’m ready to stop listening to hype and predictions, and I’m ready to start seeing results.

Yeah. I mean the whole reason for this article was to lay out my opinion that due to the sloppy nature of AI music’s rollout, I don’t believe the music industry will be in crisis and overloaded by Q2 of 2026, which was my position in November/December. I think it’s going to take much longer, and be more difficult. They’ve lost the plot with the public. That doesn’t mean they can’t get it back and AI music won’t be disruptive or even catastrophic in the future. But even if there were some great AI songs, they’d be buried beneath the slop too.

What I do think is that we need to plan and put guardrails up now to protect human creators. I don’t want to risk what might or might not happen. Assume the worst, and hope for the best.

Eh, I think it is more complex than just “college grads aren’t getting jobs due to AI”. AI is a part of that, but I don’t think it is at the level you seem to think it is.

More likely, companies over-hired during and after the pandemic and now are scaling back as the economy cools. Additionally, we would be ignorant to pretend that the Trump administration’s economic policies – especially with how chaotic the rollout of the tariffs has been – haven’t slowed economic growth. Independent economists have pointed out that outside of the AI/data center bubble, other sectors of the economy have had muted growth or even retrenchment as US trade policy shifts constantly.

Additionally, companies have been loath to come out and say “the Trump admin’s trade policy sucks” because they know there is nothing to be gained by saying anything negative about the President, especially when they can spin job cuts to Wall Street as “we are investing in AI!” and try to have some of that bubble rub off on them.

I would also hesitate to take Elon Musk and Sam Altman at their word. Elon is notorious for just saying crap detached from reality. We were supposed to have robotaxis and full self-driving cars by the end of 2025 per Elon. Sam Altman is a grifter whose company literally depends on hyping up his LLM to get more capital injected into it.

I’m not saying AI isn’t gonna cause job losses, but I have yet to see evidence (from impartial folks) that the tech as we know it today is anywhere near capable of doing that.

Ironically, if you want a reasoned, sober take on AI and the arts I would recommend listening to Affleck and Damon on the Joe Rogan pod from Friday.

I agree with all of the micro- and macro-economic indicators that you point to as material causes of unemployment here in early 2026.

I think a major contributor to the employment downturn in my own industry (tech) was Elon firing 80% of Twitter and keeping the company running. I think that was eye-opening for CEOs across lots of industries, but especially tech, to start asking questions about who at the business was essential and who was just collecting a check. None of that had anything to do with AI no matter how much–as you point out–CEOs and companies wanted to bask in the glow of the AI hype cycle.

And speaking of Elon, robotaxis *are* real in the Phoenix area–they have been an everyday sight for years now. They just aren’t Elon’s. I think we’re pretty close to an inflection point where it becomes obvious that making/letting everybody drive their own cars is a liability. Things could change fast. And I’ll tell you, the human implications to letting the robots drive are going to be pretty profound, though I don’t know how to predict them.

I think if we want any AI regulation it will take AI directly affecting Taylor Swift or Beyonce. Swift especially, and the Swifties, seem to have enough power to influence if not outright control some things.

If we want AI regulation we are gonna need either a new Presidential administration (and Congress) OR for Trump to kick out folks like David Sacks that are just there to get whatever Silicon Valley and Big Tech want when it comes to government contracts and regulation.

No artist – not even Swift or Beyonce – has the kind of pull you are talking about when Silicon Valley sycophants like JD Vance and David Sacks are around Trump all day. And that isn’t even talking about how Dems and R’s are both bought by Big Tech in Congress.

The problem with AI music isn’t so much that the technology exists to create it (that Pandora’s box cannot be closed) but that culturally we have a segment of the population that is so passive with their listening choices that they are ok with it, and more importantly that there is a segment of the population that is completely dazzled with the bar being lowered to the floor for music creation so that literally anyone can make it and can compete with actual artists.

There is no way to reverse technological advancements but culturally the only possible solution is to shame the dunderheads that think AI-created music is valid. The people who are making AI songs and labeling it with Chris Stapleton and George Strait etc are the same internet types who will post anything and tag a bunch of famous artists to try and push their posts in the algorithm.

I agree that there is no “putting the genie back in the bottle” with the Tech, but I also think a very fair and reasonable position is that the various streaming and social platforms simply identify if a song is “AI generated” or not.

It’s a game of whack-a-mole certainly – but it is absolutely feasible for these platforms to do. They just don’t want to because the very same platforms that stream music are owned by the companies pushing AI (Google, Amazon, Apple, hell even Spotify is jamming AI into their apps). And right now the economic calculus is that it is better for their bottom line to allow the platforms to get run over by AI slop than it is to pull back on said slop in even the softest of ways.

I agree that they should identify AI created music and at least give subscribers the option to opt out of it being recommended in playlists. It’s maddening considering that streaming platforms are great for introducing listeners to new artists and now there is the fear of AI music being snuck in.

Source: savingcountrymusic.com