    20 seconds to approve a military strike; 1.2 seconds to deny a health insurance claim. The human is in the AI loop. Humanity is not

April 7, 2026

In the first 24 hours of the war with Iran, the United States struck a thousand targets. By the end of the week, the total exceeded 3,000—twice as many as in the “shock and awe” phase of the 2003 invasion of Iraq, according to Pete Hegseth. This unprecedented number of strikes was made possible by artificial intelligence. The U.S. Central Command (Centcom) insists that humans remain in the loop on every targeting decision, and that the AI is there to help them make “smarter decisions faster.” But exactly what role humans can play when systems operate at this pace is unclear.

    Israel’s use of AI-enabled targeting in its war on Hamas may offer some insights. An investigation last year reported that the Israeli military had deployed an AI system called Lavender to identify suspected militants in Gaza. The official line is that all targeting decisions involved human assessment. But according to one of Lavender’s operators, as the humans involved came to trust the system, they limited their own checks to nothing more than confirming that the target was a male. “I would invest 20 seconds for each target,” the operator said. “I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

    The same pattern has already taken hold in business. In 2023, ProPublica revealed that Cigna, one of America’s largest health insurers, had deployed an algorithm to flag claims for denial. Its physicians, who were legally required to exercise their clinical judgment, signed off on the algorithm’s decisions in batches, spending an average of 1.2 seconds on each case. One doctor denied more than 60,000 claims in a single month. “We literally click and submit,” a former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”

    Twenty seconds to approve a strike; 1.2 seconds to deny a claim. The human is in the loop. Humanity is not.


    Difficulty by Design

    The novelist Milan Kundera writes of the terrifying weight of being confronted with the enduring seriousness of our actions. But while lightness might seem attractive in the face of this impossibly heavy burden, it is ultimately unbearable. Disconnection from the weightiness of our decisions deprives them of substance, of meaning.

    AI promises to lift the burden of difficult and cognitively demanding work—it makes the work lighter. Decisions become quicker and easier. In many domains, that is genuine progress. But some decisions are important enough that we ought to feel their weight. It ought to take time to decide to kill a person or deny a healthcare claim. It ought to be difficult to figure out which buildings to bomb. In such decisions, the difficulty serves a function—it is a feature, not a bug. It is a mechanism that forces institutions to reckon with what they are doing. When AI removes that weight, the institution doesn’t become more efficient. It becomes numb. When AI takes away the burden of making decisions about who lives and who dies, this is not progress. This is moral degradation.

    If the human in the loop is spending mere seconds on each decision, then the question of whether the system is autonomous or human-supervised becomes largely semantic. We need to insist on humanity in the loop as well. In cases like these, the human must be allowed to be human, even if that means they are slower, less accurate, and less efficient. That is the cost we pay for something absolutely necessary: We need the human to feel the weight of the decisions they are making, because difficulty creates the friction that makes people pause, question, and push back.

    Institutional Culture

    When hard decisions become easy, the institution itself changes. People stop questioning because there is nothing that feels worth questioning—the system has already decided, and the human’s role is to confirm. Dissent drops because dissent requires friction, and friction has been engineered out. Accountability is undermined because everyone knows that it’s the computer that’s making the decisions.

    The Cigna physician who denied 60,000 claims in a month was not cruel. She had been placed in a system where denying a claim required no more effort than clicking a button. The system did something more insidious than corrupt her judgment—it made it unnecessary. That is why the Cigna case is not a story about a single bad actor. Rather, it is a story about what happens to any institution that systematically engineers the weight out of its hardest decisions.

    The Cost of Hollowing Out Accountability

For businesses, the cost of hollowed-out accountability shows up in three places.

    First, liability. An algorithm cannot be sued, fired, or held responsible for its errors. The organization that deployed it can. Rubber-stamp oversight is not a legal gray area—it is a liability waiting for lawyers to mobilize.

    Second, institutional fragility. When humans stop genuinely engaging with decisions, they stop learning from them. When the machine always seems to get things right, no one develops the kind of judgment needed to determine when it is actually wrong. Organizations that optimize humans out of their decision loops become dependent on systems they no longer fully understand. And this leads to brittleness in precisely the moments that demand resilience.

Third, trust. Customers, employees, and regulators may want to know whether an AI made a decision. They will certainly want to know whether anyone is truly responsible for it. In too many organizations, the answer is no, and that answer has deep consequences for the organization’s relationships with those it is answerable to.

    The Weight Test

    Before using AI to make any decision process easier, leaders should ask four questions.

    1. What institutional behaviors does the current difficulty of this decision produce—e.g., scrutiny, escalation, dissent—and what is the cost of losing them?

    2. If something goes wrong, can we identify someone who wrestled with the decision—or only someone who clicked approve?

3. How would we know if the humans in this process have become rubber stamps? What would we measure, and are we measuring it? (One possible measure is sketched at the end of this section.)

    4. If the people affected by this decision learned exactly how it was made and how long the human spent on it, would the institution be comfortable defending that process in public?

    These questions won’t appear in any AI vendor’s implementation checklist. That is precisely why they matter.
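
To make the third question concrete, one crude but telling measure is how long reviewers actually spend on each AI-recommended decision, and how often they overrule the system. The Python sketch below is purely illustrative: the Review record, the threshold values, and the flagging rule are all assumptions introduced here, not a description of any real insurer’s or military’s tooling, and sensible thresholds would differ by domain.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical audit record: one row per human review of an AI-recommended decision.
@dataclass
class Review:
    reviewer_id: str
    seconds_spent: float   # dwell time on this case
    overrode_ai: bool      # True if the human reversed the AI's recommendation

def rubber_stamp_report(reviews: list[Review],
                        min_avg_seconds: float = 30.0,
                        min_override_rate: float = 0.02) -> dict[str, dict]:
    """Flag reviewers whose dwell time or override rate suggests rubber-stamping.

    Both thresholds are illustrative placeholders; a claim denial and a
    strike authorization would warrant very different values.
    """
    by_reviewer: dict[str, list[Review]] = {}
    for r in reviews:
        by_reviewer.setdefault(r.reviewer_id, []).append(r)

    report = {}
    for reviewer, rs in by_reviewer.items():
        avg_seconds = mean(r.seconds_spent for r in rs)
        override_rate = sum(r.overrode_ai for r in rs) / len(rs)
        report[reviewer] = {
            "avg_seconds": avg_seconds,
            "override_rate": override_rate,
            "flagged": avg_seconds < min_avg_seconds
                       or override_rate < min_override_rate,
        }
    return report

# A reviewer averaging 1.2 seconds per case with zero overrides is flagged instantly.
sample = [Review("reviewer_a", 1.2, False) for _ in range(50)]
print(rubber_stamp_report(sample)["reviewer_a"])
```

Even a dashboard this simple would have surfaced Cigna’s 1.2-second reviews; the hard part is not the measurement but the institutional willingness to act on it.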

    Conclusion

We are told that AI liberates us—from drudgery, from slow processes, from the burden of hard decisions. And often it does. But not every burden is a problem to be solved. Sometimes, the burden is the point. The weight a commander should feel before authorizing a strike, the effort a physician expends before denying care—these are not inefficiencies to be optimized away. They are the mechanisms that keep institutions honest about the power they exercise.

Of course, organizations that engineer that weight away will be faster and lighter. For a while, it may even look like they are winning. But they will also be the ones that discover, too late, that the difficulty was the price of being the one who decides—and that the moment an organization stops paying it, it has no business deciding at all.
