Compatriot Chronicle
    Institutions are drowning in AI-generated text and they can’t keep up

February 11, 2026

In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. As near as the editors could tell, many submitters simply pasted the magazine's detailed story guidelines into an AI and sent in the results. And Clarkesworld wasn't alone: other fiction magazines have also reported a surge of AI-generated submissions.

This is only one example of a ubiquitous trend. Legacy systems like submission queues relied on the difficulty of writing and thinking to limit volume. Generative AI removes that limit, and the humans on the receiving end can't keep up.

    This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly by people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism, and hiring, it’s the same story.

    Like Clarkesworld’s initial response, some of these institutions shut down their submissions processes. Others have met the offensive of AI inputs with some defensive response, often involving a counteracting use of AI. Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms turn to AI moderators. Court systems use AI to triage and process litigation volumes supercharged by AI. Employers turn to AI tools to review candidate applications. Educators use AI not just to grade papers and administer exams, but as a feedback tool for students.

    These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases. There is also harm if the established measures of academic performance—publications and citations—accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.

    Upsides of AI

    Yet some of these AI arms races have surprising hidden upsides, and the hope is that at least some institutions will be able to change in ways that make them stronger.

    Science seems likely to become stronger thanks to AI, yet it faces a problem when the AI makes mistakes. Consider the example of nonsensical, AI-generated phrasing filtering into scientific papers.

A scientist using an AI to assist in writing an academic paper can be a good thing, if used carefully and with disclosure. AI is increasingly a primary tool in scientific research: for reviewing literature, writing code, and analyzing data. And for many, it has become a crucial support for expression and scientific communication. Pre-AI, better-funded researchers could hire humans to help them write their academic papers. For many authors whose primary language is not English, hiring this kind of assistance has been an expensive necessity. AI provides it to everyone.

    In fiction, fraudulently submitted AI-generated works cause harm, both to the human authors now subject to increased competition and to those readers who may feel defrauded after unknowingly reading the work of a machine. But some outlets may welcome AI-assisted submissions with appropriate disclosure and under particular guidelines, and leverage AI to evaluate them against criteria like originality, fit, and quality.

Others may refuse AI-generated work, but this will come at a cost. It's unlikely that any human editor or technology can reliably differentiate human from machine writing. Instead, outlets that wish to exclusively publish humans will need to limit submissions to a set of authors they trust not to use AI. If these policies are transparent, readers can pick the format they prefer and read happily from either or both types of outlets.

We also don't see any problem if a job seeker uses AI to polish their resume or write a better cover letter: The wealthy and privileged have long had access to human assistance for those things. But it crosses the line when AIs are used to lie about identity and experience, or to cheat on job interviews.

    Similarly, a democracy requires that its citizens be able to express their opinions to their representatives, or to each other through a medium like the newspaper. The rich and powerful have long been able to hire writers to turn their ideas into persuasive prose, and AIs providing that assistance to more people is a good thing, in our view. Here, AI mistakes and bias can be harmful. Citizens may be using AI for more than just a time-saving shortcut; it may be augmenting their knowledge and capabilities, generating statements about historical, legal, or policy factors they can’t reasonably be expected to independently check.

    Today’s commercial AI text detectors are far from foolproof.

    Fraud booster

    What we don’t want is for lobbyists to use AIs in astroturf campaigns, writing multiple letters and passing them off as individual opinions. This, too, is an older problem that AIs are making worse.

    What differentiates the positive from the negative here is not any inherent aspect of the technology; it’s the power dynamic. The same technology that reduces the effort required for a citizen to share their lived experience with their legislator also enables corporate interests to misrepresent the public at scale. The former is a power-equalizing application of AI that enhances participatory democracy; the latter is a power-concentrating application that threatens it.

In general, we believe writing and cognitive assistance, long available to the rich and powerful, should be available to everyone. The problem comes when AIs make fraud easier. Any response needs to balance embracing that newfound democratization of access with preventing fraud.

    There’s no way to turn this technology off. Highly capable AIs are widely available and can run on a laptop. Ethical guidelines and clear professional boundaries can help—for those acting in good faith. But there won’t ever be a way to totally stop academic writers, job seekers, or citizens from using these tools, either as legitimate assistance or to commit fraud. This means more comments, more letters, more applications, more submissions.

    The problem is that whoever is on the receiving end of this AI-fueled deluge can’t deal with the increased volume. What can help is developing assistive AI tools that benefit institutions and society, while also limiting fraud. And that may mean embracing the use of AI assistance in these adversarial systems, even though the defensive AI will never achieve supremacy.

    Balancing harms with benefits

    The science fiction community has been wrestling with AI since 2023. Clarkesworld eventually reopened submissions, claiming that it has an adequate way of separating human- and AI-written stories. No one knows how long, or how well, that will continue to work.

    The arms race continues. There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance of harms it wreaks and opportunities it presents as we muddle our way through the changing technological landscape.

    Bruce Schneier is an adjunct lecturer in public policy at Harvard Kennedy School.

    Nathan Sanders is an affiliate at the Berkman Klein Center for Internet & Society at Harvard University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


