    Compatriot Chronicle
    Business

    Does the public comment system have an AI problem?

March 19, 2026 · 9 min read
    Last year, when an air quality agency in Southern California proposed a new rule to encourage consumers to buy heat pumps instead of gas heaters, it was flooded with 20,000 comments opposing the idea—many more than usual. “Due to the volume and nature of these submissions, South Coast AQMD had concerns about their authenticity,” says Rainbow Yeung, an agency spokesperson. The agency’s executive director got an email thanking him for his “opposition” to a rule that his own team had drafted.

    To check the validity of the comments, the South Coast Air Quality Management District reached out to a small sample of commenters—172 people—to confirm that they had actually sent the emails. Almost no one responded. But of the five people who did, three of them said that they didn’t know anything about the comments that had been submitted under their names. In a separate investigation, a campaigner from the Sierra Club also started contacting people on the list; the four people he reached also said that they hadn’t sent emails.

    The Los Angeles Times recently reported that CiviClick, a company that bills itself as a provider of “AI-powered advocacy tools,” had led the campaign to send opposition comments. The client was a public affairs consultant with ties to the gas industry.

    CiviClick denies that it sent any email without consent or that it used AI to fabricate automated messages. The agency is still investigating the situation; the executive director said in a recent meeting that the team was exploring more “aggressive” ways of sampling commenters, since it couldn’t draw definitive conclusions from the limited initial response.

Regardless of what happened in this case, the episode points to a broader question: if artificial intelligence can now easily impersonate humans, and if comments can be submitted without a person's knowledge, how can government agencies know whether a public comment was actually written by a citizen rather than a bot?

    Fake comments aren’t new. In 2017, the Federal Communications Commission received 22 million comments during the debate on net neutrality rules—and around 18 million of them were later found to be fake. Millions came from a single college student, while half a million came from Russian email addresses. After an investigation, New York Attorney General Letitia James fined six “lead generator” companies that had collectively impersonated millions of real people when they submitted comments.

    AI, in theory, could make it easier to write and submit fake comments that sound real. CiviClick says that it simply uses AI to help real people personalize their comments. The platform asks users questions related to the issue—for example, how an increase in taxes would affect their budget—and then tailors an email. (The company also uses AI to predict how likely someone would be to respond to a campaign.)

    CiviClick founder and CEO Chazz Clevinger says he could not speak to the specifics of the Southern California campaign but insists it meaningfully captured the authentic views of people across the region. “A homeowner in Riverside County who had recently installed a gas furnace wrote a different message than a renter in Los Angeles who was concerned about landlord compliance costs,” he tells Fast Company. “A contractor in San Bernardino County who builds new homes wrote a different message than a retiree in Orange County worried about electricity grid strain during heat waves.” Clevinger argues that the tool is simply helping people “articulate their genuine concerns,” and that they’re no less legitimate than messages written from scratch.

    The Sierra Club campaigner has a different take. Even if someone consents to have AI tweak a comment, it could be problematic. “Regulators give priority to customized comments, which require time and effort to send, versus form letters or petitions, which do not,” says Dylan Plummer, campaign adviser for the Sierra Club’s “Clean Heat” campaign. “Using AI to generate custom comments creates the illusion of engaged individuals willing to spend the time to draft a thoughtful statement on an issue—when in fact, they are engaged at the same level as someone who signed a traditional form letter or petition.”

    The bigger challenge, Plummer says, is whether some public comments are attributed to people who never had anything to do with them. In another case in California, he started calling people who had submitted comments on a proposed rule at the Bay Area Air District. Another nonprofit, the Energy and Policy Institute, filed a public records request to get copies of emails that were sent using a different software platform called Speak4. (Speak4 declined to talk to Fast Company; in a San Francisco Chronicle article, the company’s client, the Bay Area Council, said that neither it nor Speak4 submitted letters without consent.)

    Of the seven people that Plummer spoke with, all seven said that they had no knowledge of the email. “Some even said that they didn’t know what the Bay Area Air District was,” he says. “One woman I spoke to said, ‘Why would I ever oppose regulations to protect clean air?’”

    It’s very difficult to prove whether comments are actually fake after the fact. “I had to call dozens and dozens of numbers that I was able to access through internet sleuthing,” Plummer says. Most people didn’t want to talk. “When I’m talking, I’m like, ‘Hi, my name is Dylan, and I’m investigating a potential case of identity theft.’ And their first response is, ‘Oh, this guy’s totally a scammer,’ and they hang up.”

In another case in North Carolina, county commissioners received hundreds of emails in support of a new gas pipeline. But when they started to respond to some of the emails, their constituents said that they hadn’t sent them. The mass email campaign backfired. “If they’re this sloppy with their advocacy work, what does that say about our concerns about their maintenance, which is the critical thing,” one commissioner told E&E News. The board voted unanimously for a resolution that raised concerns about the project and recommended that federal officials deny a permit.

Williams, the company that wanted to build the pipeline, suggested that people might have forgotten that they sent an email. CiviClick, which facilitated the emails for the company, said the same thing about the campaign in Southern California. (The air quality agency, however, contacted the supposed commenters shortly after the comments were submitted.) Clevinger also suggested that there could be “deliberate mischaracterization or misuse of our tools” by groups like the Sierra Club that “have a vested interest in discrediting its authenticity.”

    When agencies do receive a flood of fake emails, it’s not clear how much that necessarily affects decision-making. “What matters is not the identity of the commenter,” says Steven Balla, a political science professor at George Washington University, who studies public commenting. “What matters is the content of the comment.” Agencies are charged with considering the technical, legal, and economic information that’s submitted to them during the comment process, he says. But they’re not adding up how many comments they got on each side, and it’s the ideas that matter more than the name of the person who submitted them.

    Fake or AI-generated comments “smell icky,” Balla says. “But I haven’t yet been moved that, wow, this is totally changing the way policy decisions are made.” In the case of net neutrality, he argues, the millions of comments didn’t ultimately sway what the first Trump administration wanted to do.

“What I know about misinformation more generally is that misinformation generally has minimal effects on what people believe or what they do,” says Jonathan Brennan, director of the Center on Technology Policy at New York University. “I’d be far more concerned about the secondary effects of a general loss in trust: government officials saying, ‘Well, we can’t really trust any public comments, maybe they’re all fake, maybe they’re not, so we’re just going to give them less weight.’” A local school board, for example, might theoretically listen more to people who show up to comment in person, making it harder for others to share their opinion if they don’t have time to attend.

    Agencies can use technology to sort through digital comments and summarize duplicates, Balla says. That’s different from older mass comments that showed up on postcards. “Back in the old days in the ’90s, I was talking to an agency that got at that time maybe 100,000 comments,” he says. “Those were still paper-based. They literally had some warehouse space out in Rockville, Maryland, where they were basically putting the pieces of paper into piles. That was a lot of work. Now you get 100,000 comments, and 99,000 of them are going to be nearly identical. And you can figure that out in seconds.”
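To illustrate how quickly near-identical comments can be flagged, here is a minimal sketch of greedy duplicate clustering using text similarity. This is not the agencies' actual tooling; the sample comments and the 0.9 similarity threshold are invented for demonstration.

```python
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits don't hide duplicates.
    return " ".join(text.lower().split())

def group_near_duplicates(comments: list[str], threshold: float = 0.9) -> list[list[int]]:
    """Greedily cluster comments whose normalized text is at least
    `threshold` similar to a cluster's first member."""
    clusters: list[list[int]] = []
    for i, comment in enumerate(comments):
        text = normalize(comment)
        for cluster in clusters:
            representative = normalize(comments[cluster[0]])
            if SequenceMatcher(None, representative, text).ratio() >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

comments = [
    "I oppose this rule because heat pumps are too expensive.",
    "I OPPOSE this rule because heat pumps are too expensive!",
    "Please protect clean air in our region.",
]
print(group_near_duplicates(comments))  # → [[0, 1], [2]]
```

The first two comments differ only in casing and punctuation, so they land in one cluster; the third stands alone. Real systems would use faster fingerprinting (such as hashing shingles of each comment) to handle hundreds of thousands of submissions, but the principle is the same.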

    Still, if AI can easily generate a series of unique comments, the process could get harder. The Sierra Club’s Plummer suggests that something needs to change. “Astroturfing and the creation of front groups—a polluting industry working to create the illusion of widespread support for a position—is nothing new,” he says. “Our big concern, though, is that these new technologies, with AI proliferating, are going to put these tactics on steroids and make them even more insidious and difficult to root out. And it is, in my opinion, a direct threat to democratic processes and decision-making.”

    At the South Coast Air Quality Management District, the board voted narrowly to defeat the proposed rule that would have curbed pollution. Though CiviClick touted its work in influencing the decision, it’s hard to say what impact the comments had. The board directed the agency to send the rule back to a committee for further discussion. The rule could be revisited later, though no timeline has been set.

    Now, the Sierra Club is asking California’s attorney general and L.A.’s district attorney to launch a fraud investigation. State Sen. Christopher Cabaldon recently introduced a bill called “People Not Bots,” which would clarify that AI tools don’t qualify as people and shouldn’t be offering fake public input.

And at the air quality agency in Southern California, staff are exploring ways to make comment submission more secure, including portals that could offer new ways to verify that a submission is coming from a human, though that is an increasingly difficult task. “Maintaining the integrity of our public process is a top priority,” SCAQMD’s Yeung says.


