
    AI’s biggest problem isn’t intelligence. It’s implementation

February 19, 2026 · 7 Mins Read

    Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

    The AI ‘arms race’ may be more of an ‘arm-twist’

    The big AI companies tell us that AI will soon remake every aspect of business in every industry. Many of us are left wondering when that will actually happen in the real world, when the so-called “AI takeoff” will arrive. But because there are so many variables, so many different kinds of organizations, jobs, and workers, there’s no satisfying answer. In the absence of hard evidence, we rely on anecdotes: success stories from founders, influencers, and early adopters posting on X or TikTok.

    Economists and investors are just as eager to answer the “when” question. They want to know how quickly AI’s effects will materialize, and how much cost savings and productivity growth it will generate. Policymakers are focused on the risks: How many jobs will be lost, and which ones? What will the downstream effects be on the social safety net?

Business schools and consulting firms have turned to research to answer these questions. One of the most consequential recent efforts was a 2025 MIT study, which found that despite spending between $30 billion and $40 billion on generative AI, 95% of large companies had seen “no measurable P&L [profit and loss] impact.”

    More recent research paints a somewhat rosier picture. A recent study from the Wharton School found that three out of four enterprise leaders “reported positive returns on AI investments, and 88% plan to increase spending in the next year.”

My sense is that the timing of the AI takeoff is hard to pin down because adoption is so uneven and depends heavily on the application. Software developers, for example, are seeing clear efficiency gains from AI coding agents, and retailers are benefiting from smarter customer-service chatbots that can resolve more issues automatically.

It also depends on the culture of the organization. Companies with clear strategies, good data, some PhDs, and internal AI enthusiasts are making real progress. I suspect that many older, less tech-oriented companies remain stuck in pilot mode, struggling to prove ROI.

Some studies have shown that in the initial phases of deployment, human workers must invest considerable time correcting or training AI tools, which severely limits net productivity gains. Others show that workers in AI-forward organizations do see substantial productivity improvements, but as a result they become more ambitious and end up working more, not less.

    The MIT researchers included an interesting disclaimer on their research results. Their sobering findings, they noted, did not reflect the limitations of the AI tools themselves, but rather the fact that organizations often need years to adapt their people and processes to the new technology.

    So while AI companies constantly hype the ever-growing intelligence of their models, what ultimately matters is how quickly large organizations can integrate those tools into everyday work. The AI revolution is, in this sense, more of an arm-twist than an arms race. The road to ROI runs through people and culture. And that human bottleneck may ultimately determine when the AI industry, and its backers, begin to see returns on their enormous investments.

    New benchmark finds that AI fails to do most digital gig work

    AI companies keep releasing smarter models at a rapid pace. But the industry’s primary way of proving that progress—benchmarks—doesn’t fully capture how well AI agents perform on real-world projects. A relatively new benchmark called the Remote Labor Index (RLI) tries to close that gap by testing AI agents on projects similar to those given to remote contractors. These include tasks in game development, product design, and video animation. Some of the assignments, based on actual contract jobs, would take human workers more than 100 hours to complete and cost over $10,000 in labor.

    Right now, some of the industry’s best models don’t perform very well on the RLI. In tests conducted late last year, AI agents powered by models from the top AI developers including OpenAI, Anthropic, Google, and others could complete barely any of the projects. The top-performing agent, powered by Anthropic’s Opus 4.5 model, completed just 3.5% of the jobs. (Anthropic has since released Opus 4.6, but it hasn’t yet been evaluated on the RLI.)

    The test puts the question of the current applicability of agents in a different light, and may temper some of the most bullish claims about agent effectiveness coming from the AI industry. 

    Silicon Valley’s pesky ‘principals’ re-emerge, irking the White House and Pentagon

The Pentagon and the White House are big mad at the safety-conscious AI company Anthropic. Why? Because Anthropic doesn’t want its AI used to target humans with autonomous drones, or for mass surveillance of U.S. citizens.

    Anthropic now has a $200 million contract allowing the use of its Claude chatbot and models by federal agency workers. It was among the first companies to get approval to work with sensitive government data, and the first AI company to build a specialized model for intelligence work. But the company has long had clear rules in its user guidelines that its models aren’t to be used for harm. 

The Pentagon believes that after paying for the technology, it should be able to use it for any legal application. But acceptable use for AI differs from that for traditional software: AI’s potential for autonomy makes it inherently more dangerous, and its risks grow the closer it is used to the battlefield.

The disagreement, if not resolved, could jeopardize Anthropic’s contract with the government. But it could get worse. Over the weekend, the Pentagon said it was considering classifying Anthropic as a “supply chain risk,” which would mean the government views Anthropic as roughly as trustworthy as Huawei. Government contractors of all kinds would be pressured to stop using Anthropic.

    Anthropic’s limits on certain defense-related uses are laid out in its Constitution, a document that describes the values and behaviors it intends its models to follow. Claude, it says, should be a “genuinely good, wise, and virtuous agent.” “We want Claude to do what a deeply and skillfully ethical person would do in Claude’s position.” To critics in the Trump administration, that language translates to a mandate for wokeness.

    The whole dust-up harkens back to 2018, when Google dropped its Project Maven contract with the government after employees revolted against Google technology being used for targeting humans in battle. Google still works with the government, and has softened its ethical guidelines over the years.

    The truth is, tech companies don’t stand on principle like they used to. Many have settled into a kind of patronage relationship with the current regime, a relatively inexpensive way to avoid MAGA backlash while keeping shareholders satisfied. Anthropic, in its way, seems to be taking a different course, and it may suffer financially for it. But, in the longer term, the company could earn some respect, trust, and goodwill from many consumers and regulators. For a company whose product is as powerful and potentially dangerous as consumer AI, that could count for a lot. 

    More AI coverage from Fast Company: 

    • OpenAI, Google, and Perplexity near approval to host AI directly for the U.S. government
    • New AI models are losing their edge almost immediately
    • Meta patents AI that lets dead people post from the great beyond
    • These 6 quotes from OpenClaw creator Peter Steinberger hint at the future of personal computing

    Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
