Compatriot Chronicle
    ‘AI for America’ wants to be a New Deal for workers and communities. But it needs teeth

October 1, 2025

Curing cancer. Reducing carbon emissions. Maximizing business efficiency. To achieve all this and develop untold social goods, artificial intelligence “accelerationists” at companies like Google, Meta, and OpenAI believe their industry has a duty to speed ahead toward superintelligence, or AI that’s far superior to humans at most tasks. Key to that revolution will be the build-out of data centers.

Meanwhile, a technological transformation of the workplace already appears to be underway. The nation’s largest employer, Walmart, said that because of its AI implementation, hiring will remain flat over the next three years even if revenues rise. Every business—not just the big ones—will eventually reckon with some version of that transformation. Whoever wields the technology best will get an edge. Regulators, in turn, must find forward-looking ways of controlling the excesses of the winners while mitigating the hardship of the losers—and fast.

    Sen. Mark Kelly fears that the biggest losers could be working-class people. The Arizona Democrat’s “AI for America” plan, arguably the most comprehensive Democratic answer to the Trump administration’s pro-industry “AI Action Plan,” would create an industry-financed “AI Horizon Fund” to pay for energy-grid upgrades and workforce reskilling. 

But while Kelly’s plan is admirable, it dodges the policy specifics necessary for real legislation. He also fails to grasp certain economic and political realities of the AI industry and its players. And the federal government, as it heads for a shutdown, seems far from capable of passing any thoughtful AI legislation.

    Here, we attempt to fill in these gaps. 

    Data centers everywhere

    The AI models poised to reshape business practices reside on servers humming away in the dark within massive single-story buildings called data centers. 

    New data centers represent the most tangible sign of the so-called AI boom. Most estimates say that there are more than 500 “hyperscale” data centers, housing tens or hundreds of thousands of servers, in operation in the U.S. today. Between 50 and 100 more are either licensed or under construction in 2025. 

Kelly’s home state of Arizona is regarded as one of the most attractive places for data center projects because of its low-cost, reliable power, affordable land, easy permitting, and tax incentives. (Apple, Google, Microsoft, and Amazon Web Services (AWS) already have data centers there.)

States compete to attract data center projects, and that competition comes at a cost. Over the past five years, 10 states have each lost more than $100 million per year to data center tax abatements, with Texas and Virginia each giving away roughly $1 billion, according to a study by Good Jobs First, an economic development policy advocacy group.

    Per the study, a total of 32 states offer such exemptions to Big Tech companies and their partners; 12 states don’t disclose the exemption amounts, which makes calculating a national total difficult. But it’s in the billions, and climbing. Whether all this investment truly delivers in the long run remains unclear. 

    The infrastructure gap

All this is prompting Arizona residents to ask harder questions about these projects. In August, Tucson rejected “Project Blue,” a proposal to build a 290-acre AWS data center near the city. Officials and residents cited concerns over water use, the potential burden on the local power grid, and the possibility of spiking electricity rates to fund additional power infrastructure. Deloitte estimates that power demand from AI data centers in the U.S. could grow to about 123 gigawatts by 2035, up from roughly 4 gigawatts in 2024.

The problem is that the existing power grid was built to serve households and businesses, not legions of sprawling data centers. When a new center goes up, the local or regional energy supplier typically must augment the grid’s capacity to meet the demand. Those infrastructure costs are often passed on to residents, and to local businesses large and small, through rate hikes or tax increases.

    Who should pay?

    Kelly believes that AI companies should pay for energy infrastructure upgrades necessitated by data center power demand. But his proposal offers no mechanism for metering the AI companies’ financial obligation or the amount they should pay into the fund for infrastructure augmentation. 

Making this workable would require working with utilities and state and local energy regulators to determine a fair fee. To pay for infrastructure upgrades, Kelly could, for example, require a small surcharge on every megawatt-hour of power purchased by data center operators, with the proceeds going into the fund.

Congress could also require data center developers to buy or lease enough land to contain both their facilities and the renewable energy infrastructure to power and cool them. The data center operators could also be required to pay to connect those renewable sources to the local grid, so that any surplus power they generate doesn’t go to waste. Elon Musk’s xAI, for example, brought its own power to its massive Colossus data center in Memphis. Unfortunately, that power came from dirty methane-burning turbines, and the facility quickly became one of the area’s biggest polluters: a cautionary tale.

For a city and its utility, the biggest fear is that an AI data center could pick up and go, in pursuit of more permissive environmental laws or cheaper power rates, leaving behind an empty hulk and suddenly unemployed local workers. Establishing federal-level environmental guidelines and power-grid responsibilities could remove some of the incentives to leave, forcing data center operators to consider that at least some of those costs would be the same no matter where they went.

    Reskilling, but make it AI

    Tech companies often say that in their ideal world, humans will work in tandem with AI tools, and that new jobs will emerge that require some skill with these technologies as old ones are eliminated. Arriving at the right balance will likely take years. Because of the ongoing, rapid advances in AI, the process may never truly end.

In the near term, the biggest beneficiaries are likely to be the companies selling the tools. Kelly argues, reasonably, that the AI companies should help pay for the costs of job displacement and reskilling workers. He suggests that the AI Horizon Fund be used to pay for AI education programs at community colleges, trade schools, and universities.

    Kelly also believes that the government should pass laws to make sure that workers themselves benefit from AI efficiencies. This could mean “reimagining what the workweek looks like,” as well as policies to strengthen worker bargaining power through stronger union representation.  

Kelly suggests “an AI economic adjustment program that includes an expanded safety net, including more generous unemployment insurance.” That’s a good starting point, but the approach assumes that job displacement needn’t be permanent and that workers can be reskilled for the new workforce. For some workers in some industries, especially older workers, that assumption simply may not hold.

    A permanent fund

    Estimating the amount of AI-linked job loss and the resulting need for reskilling, as well as how much AI companies should pay in redress, would be difficult. The Labor Department would need new tracking tools and new methods of compelling businesses to report the AI-related impact on the workforce. As an alternative, the amount AI companies would pay into the AI Horizon Fund for education and safety net costs could also be determined by the gigawatts of power consumed in their data centers to train or power models.

    There’s another limitation to Kelly’s plan: By focusing solely on “AI companies” to pay into his fund, it ignores the ecosystem of services and resources needed to make it all work. One need only look at recent headlines to understand who the current winners are. While neither OpenAI nor Anthropic is profitable, both are spending billions on Nvidia GPUs, the superfast graphics processing units necessary to power AI models. Nvidia will also invest $100 billion to buy a piece of OpenAI, with the timing of the investment dependent on how fast OpenAI deploys the new chips. Perhaps Nvidia should be asked to pay into the AI Horizon Fund? 

    OpenAI also just inked a $300 billion deal to buy cloud-computing capacity (in the form of data centers) from Oracle. (After the deal was announced, Oracle briefly became the most valuable tech company in the world, before being overtaken once again by Nvidia.) Oracle and other cloud-computing providers like Microsoft, AWS, and Google could pay into a federal fund to cover the downsides of AI. 

    Venture capital firms like Andreessen Horowitz and Josh Kushner’s Thrive Capital are betting unprecedented sums on AI startups and stand to make many multiples of their investments before the founders and employees get their cuts. It’s reasonable, then, to suggest that the financiers help pay for the broader effects of the technologies in which they invest.

    A fund with teeth

    Perhaps the biggest weakness in Kelly’s plan is its failure to explain how Washington would enforce contributions to the fund. But there are options.

    The federal government could mandate that AI companies and their partners contribute and impose penalties on those who seek to evade their obligations. The Energy Department or the Federal Energy Regulatory Commission could require data center operators with high grid demand to pay for a portion of transmission and distribution upgrades as a condition of connecting to the power grid, drawing inspiration from the Highway Trust Fund or the Universal Service Fund. AI companies, along with data center operators, might contribute to a similar federal fund to pay for grid upgrades and ensure that energy rates don’t rise as a result of their demands on the grid. 

Kelly appears to favor a public-private partnership approach. To make that work, the government could set up a program with certain mandates while enticing participation through incentives. It might start by setting up an industry association that includes AI companies, utilities, and developers, which would establish a jointly governed trust fund to support power grid improvements and new AI reskilling programs. The government might then offer tax breaks or matching funds to push the stakeholders to contribute.

    This approach could ease the government’s enforcement burden. It’s also more realistic: The tech industry isn’t accustomed to following federal mandates that cost real money. 

    The safety question

The debate over AI safety has laid bare the tension between accelerationists, who don’t want to see their sector’s momentum slowed for any reason, and those who fear the technology’s excesses. That tension has played out in California, where Gov. Gavin Newsom on Monday signed a landmark AI bill that requires companies developing the largest models to create and publicize a set of AI safety and security protocols.

In 2024, VCs and AI companies complained about California’s SB 1047, the state’s first major AI safety bill, which focused on the biggest AI models. The AI industry opposed the bill’s requirement that AI companies file periodic safety reports to the state and objected to a provision holding AI developers legally liable for critical harms caused by their models. The bill sailed through the legislature before Newsom vetoed it, arguing that the legislation was overbroad and should have focused more on how AI models are applied.

This year, state Sen. Scott Wiener (D-San Francisco) introduced a new AI safety bill, SB 53, that incorporated the lessons of the 2024 veto. It emphasized transparency, requiring that AI companies establish formal safety frameworks, publish their underlying protocols, and report “critical safety incidents.”

    With that bill now becoming law—in the AI industry’s home state, no less—it could serve as a model for striking a workable balance between reasonable safety and the drive to pursue AI’s rewards. Kelly and his fellow lawmakers could learn a lot from the process.

    A political machine

Give Kelly credit for trying to get the government in front of the next technology wave, after missing the bus entirely with social media. Yet with a hostile political environment and a Democratic Party stuck in the wilderness for now, little progress is likely. Trump’s AI plan prioritizes clearing regulatory and funding hurdles for the quick rollout of new data centers. Meanwhile, the AI industry’s influence in Washington, D.C., grows, with the chief aim of killing meaningful AI safety regulation. If some version of Kelly’s plan has any shot (likely as part of some other must-pass legislation), he’ll have to convince his Republican colleagues that now is the time to shape the future of this industry and its impact on workers.

    Gaining that support isn’t as far-fetched as it might sound. Trump’s overall agenda is failing with Americans. Midterm elections are coming. Real oversight may come back into fashion. More importantly, strong AI regulation that protects workers has real populist appeal, something the GOP claims to crave. 

    Survey after survey has shown that both red and blue voters want the government to play a strong role in protecting jobs from AI and ensuring AI safety. Kelly is speaking to them. “Our society thrives when employment is high and income inequality is low,” he writes in the plan. “Our solutions must recognize the value of work, and that some tasks are uniquely human.” 


