Compatriot Chronicle

    Why a lack of governance will hurt companies using agentic AI

January 29, 2026 · 5 min read

    Businesses are acting fast to adopt agentic AI—artificial intelligence systems that work without human guidance—but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it’s also a business opportunity.

    I’m a professor of management information systems at Drexel University’s LeBow College of Business, which recently surveyed more than 500 data professionals through its Center for Applied AI and Business Analytics. We found that 41% of organizations are using agentic AI in their daily operations. These aren’t just pilot projects or one-off tests. They’re part of regular workflows.

    At the same time, governance is lagging. Only 27% of organizations say their governance frameworks are mature enough to monitor and manage these systems effectively.

    In this context, governance is not about regulation or unnecessary rules. It means having policies and practices that let people clearly influence how autonomous systems work, including who is responsible for decisions, how behavior is checked, and when humans should get involved.
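The three elements in that definition — who is responsible, how behavior is checked, and when humans get involved — can be made concrete. Here is a minimal sketch in Python (all names and thresholds are hypothetical, not drawn from any particular product) of a governance policy attached to an autonomous action:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a governance policy binds each autonomous
# action to a named owner, a behavior check, and an escalation rule.
@dataclass
class GovernancePolicy:
    owner: str                           # who is responsible for decisions
    check: Callable[[dict], bool]        # how behavior is checked
    needs_human: Callable[[dict], bool]  # when humans should get involved

def execute(action: dict, policy: GovernancePolicy) -> str:
    if not policy.check(action):
        return f"blocked (owner: {policy.owner})"
    if policy.needs_human(action):
        return f"escalated to human (owner: {policy.owner})"
    return f"executed autonomously (owner: {policy.owner})"

# Example: a fraud-blocking agent may act alone on routine cases
# but must route large or low-confidence cases to a person.
fraud_policy = GovernancePolicy(
    owner="fraud-ops-team",
    check=lambda a: a["amount"] >= 0,
    needs_human=lambda a: a["amount"] > 10_000 or a["confidence"] < 0.9,
)

print(execute({"amount": 50, "confidence": 0.99}, fraud_policy))
print(execute({"amount": 50_000, "confidence": 0.99}, fraud_policy))
```

The point of the sketch is only that ownership and escalation are declared before the system acts, so every outcome can be traced to a named owner.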

    This mismatch can become a problem when autonomous systems act in real situations before anyone can intervene.

    For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems behave “as designed,” unexpected conditions can lead to undesirable outcomes.

    This raises a big question: When something goes wrong with AI, who is responsible—and who can intervene?

    Why governance matters

When AI systems act on their own, responsibility no longer lies where organizations expect it. Decisions still happen, but ownership is harder to trace. For instance, in financial services, fraud detection systems increasingly act in real time to block suspicious activity before a human ever reviews the case. Customers often find out only when their card is declined.

    So, what if your card is mistakenly declined by an AI system? In that situation, the problem isn’t with the technology itself—it’s working as it was designed—but with accountability. Research on human-AI governance shows that problems happen when organizations don’t clearly define how people and autonomous systems should work together. This lack of clarity makes it hard to know who is responsible and when they should step in.

    Without governance designed for autonomy, small issues can quietly snowball. Oversight becomes sporadic and trust weakens, not because systems fail outright, but because people struggle to explain or stand behind what the systems do.

    When humans enter the loop too late

In many organizations, humans are technically “in the loop,” but only after autonomous systems have already acted. People tend to get involved once a problem becomes visible—when a price looks wrong, a transaction is flagged, or a customer complains. By that point, the system has already decided, and human review becomes corrective rather than supervisory.

    Late intervention can limit the fallout from individual decisions, but it rarely clarifies who is accountable. Outcomes may be corrected, yet responsibility remains unclear.

    Recent guidance shows that when authority is unclear, human oversight becomes informal and inconsistent. The problem is not human involvement, but timing. Without governance designed upfront, people act as a safety valve rather than as accountable decision-makers.
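The timing difference can be sketched in a few lines of Python (a hypothetical illustration, not any vendor's API). In the late pattern the system acts first and humans only correct; in the upfront pattern, defined authority decides before the action happens:

```python
# Late pattern: the autonomous action happens first; the human
# sees the outcome only afterwards and can merely correct it.
def late_review(decision, act, review):
    outcome = act(decision)
    review(decision, outcome)
    return outcome

# Upfront pattern: a risk rule and an accountable approver are
# consulted *before* the action, so oversight is supervisory.
def upfront_gate(decision, act, approve, risk_threshold=0.5):
    if decision["risk"] > risk_threshold and not approve(decision):
        return "held for accountable human decision"
    return act(decision)

charge = {"id": "txn-1", "risk": 0.8}
result = upfront_gate(
    charge,
    act=lambda d: "card blocked",
    approve=lambda d: False,  # the accountable human declines to block
)
print(result)
```

Nothing here is sophisticated; the design choice is simply where the human sits relative to the action, which is exactly what governance designed upfront decides.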

    How governance determines who moves ahead

    Agentic AI often brings fast, early results, especially when tasks are first automated. Our survey found that many companies see these early benefits. But as autonomous systems grow, organizations often add manual checks and approval steps to manage risk.

    Over time, what was once simple slowly becomes more complicated. Decision-making slows down, work-arounds increase, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trust autonomous systems.

    This slowdown doesn’t have to happen. Our survey shows a clear difference: Many organizations see early gains from autonomous AI, but those with stronger governance are much more likely to turn those gains into long-term results, such as greater efficiency and revenue growth. The key difference isn’t ambition or technical skills, but being prepared.

Good governance does not limit autonomy. It makes autonomy workable by clarifying who owns decisions, how system behavior is monitored, and when people should intervene. International guidance from the OECD—the Organization for Economic Cooperation and Development—emphasizes this point: Accountability and human oversight need to be designed into AI systems from the start, not added later.

    Rather than slowing innovation, governance creates the confidence organizations need to extend autonomy instead of quietly pulling it back.

    The next advantage is smarter governance

    The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, success will belong to organizations that clearly define ownership, oversight, and intervention from the start.

    In the era of agentic AI, confidence will accrue to the organizations that govern best, not simply those that adopt first.

    Murugan Anandarajan is a professor of decision sciences and management information systems at Drexel University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


