    Inside the quiet takeover of local journalism by AI

    November 6, 2025

    The most obvious use case for generative AI in editorial operations is to write copy. When ChatGPT lit the fuse on the current AI boom, it was its ability to crank out hundreds of comprehensible words almost instantly, on virtually any topic, that captured our imaginations. Hundreds of “ChatGPT wrote this article” think pieces resulted, and college essays haven’t been the same since.

    Neither has the media. In October, a report from AI analytics firm Graphite revealed that AI is now producing more articles than humans. And it’s not all content farms cranking out AI slop: A recent study from the University of Maryland examined over 1,500 newspapers in the U.S. and found that AI-generated copy constitutes about 9% of their output, on average. Even major publications like The New York Times and The Wall Street Journal appear to be publishing small amounts of copy that originated from a machine.

    I’ll come back to that, but the big takeaway from the study is that local newspapers—often thought to be the crucial foundation of free press, and still the most trusted arm of the media—are the largest producers of AI writing. Boone Newsmedia, which operates newspapers and other publications in 91 communities in the southeast, is a heavy user of synthetic content, with 20.9% of its articles detected as being partially or entirely written with AI.

    Media CoPilot: Want more about how AI is changing media? Never miss an update from Pete Pachal by signing up for Media CoPilot at mediacopilot.substack.com.

    Why local papers rely on AI

    Putting aside any default revulsion at AI content, this actually makes a lot of sense. Local news has been stripped down to the bone in recent years as reader attention has fragmented and advertising dollars have shrunk. A great many local papers have folded (more than 3,500 since 2005, according to the Medill School of Journalism at Northwestern University), and those that remain have adopted other means to survive. In smaller markets, like my New Jersey town, it’s not uncommon for the community paper to republish press releases from local businesses.

    The fact is, writers cost money, and writing takes time. AI, of course, radically alters that reality: for a $20-a-month ChatGPT subscription, you now have a lightning-fast robot writer, ready to tackle any subject. Many unscrupulous people treat this ability as their own room full of monkeys with typewriters, cranking out articles just to attract eyeballs—the definition of AI slop.

    But there’s a difference between slop and AI-generated copy written to inform, with the proper context, and edited by a journalist with the proper expertise. In a local news context, the use case for AI writing that’s most often cited is the lengthy school board meeting that, if covered, would take a reporter several hours of listening to recordings, synthesizing, and contextualizing just to report what happened. With AI, those hours compress to minutes, freeing up the reporter to write more unique and valuable stories.

    More likely, of course, is that the reporter no longer exists, and an editor or even a sole proprietor simply publishes as many pieces as they can that serve the community. And while it’s not ideal, I don’t see what’s wrong with that from a utilitarian perspective. If the copy informs, a human has done a quality check, and the audience is engaging with it, what does it matter whether it came from a machine?

    AI mistakes hit different

    That said, when mistakes happen with AI content, they can undermine a publication’s integrity like nothing else. This past summer, when the Chicago Sun-Times published a list of hallucinated book titles as a summer reading list, it caused a national backlash. That’s because AI errors are in a different category—since AI lacks human judgment and experience, it makes mistakes a human never would.

    That’s the main reason using AI in copy is a risky business, but safeguards are possible. For starters, you can train editors to catch the mistakes that are unique to AI. Robust fact-checking is obvious, and using grounded tools like Google’s NotebookLM can greatly reduce the chance of hallucinations. Besides factual errors, though, AI writing has many telltale quirks (repeated sentence structures, dashes, “let’s delve . . .,” etc.). I call these “slop indicators,” and, while they’re not disastrous, their continued presence in copy is a subtle signal to readers that they should question what they’re reading. Editors should stamp them out.

    Which is not to say publications shouldn’t be transparent about the use of AI in their content. They absolutely should. In fact, I’d argue being as detailed as possible about the AI’s role at both the article level and in overall strategy is crucial in maintaining trust with an audience. Most editorial “scandals” over AI articles blew up because the copy was presented as human-written (think about Sports Illustrated‘s fake writers from two years ago). When the publication is upfront about the use of AI, such as ESPN’s write-ups of certain sports games, it’s increasingly a non-event.

    Which is why it’s confusing that some major publications seem to be publishing AI copy without disclosing its presence. The study claims that AI copy is showing up in some national outlets, including the New York Times, the Washington Post, and The Wall Street Journal. This appears to be a similar, if smaller-scale, issue to the Sun-Times incident: Almost all of the instances were in opinion pieces from third parties, where it appears to be happening around 4–5% of the time.

    That suggests third parties are using AI in their writing process without telling the publication. In all likelihood, they’re not aware of the outlet’s AI policy, and their writing contracts may be ambiguous. However, it’s not like the rest of the content was totally immune from AI writing; the study revealed it to be present 0.71% of the time.

    Getting ahead of AI problems

    All of this speaks to the point about transparency: be straight with your audience and your staff about what’s allowed, and you’ll save yourself headaches later. Of course, policies are only effective with enforcement. With AI text becoming more common and more sophisticated, having effective ways of detecting and dealing with it is a key pillar of maintaining integrity.

    And dealing with it doesn’t necessarily mean forbidding it. The reality is AI text is here, growing, and not going away. The truism about AI that’s often cited—that today is the worst it will ever be—goes double for its writing ability, as that is at the core of what large language models do. Of course, you can bet there will be train wrecks over AI writing in the future, but they won’t be about who’s using AI to write. They’ll be about who’s doing it irresponsibly.
