    Why a lack of governance will hurt companies using agentic AI

January 29, 2026
    Businesses are acting fast to adopt agentic AI—artificial intelligence systems that work without human guidance—but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it’s also a business opportunity.

    I’m a professor of management information systems at Drexel University’s LeBow College of Business, which recently surveyed more than 500 data professionals through its Center for Applied AI and Business Analytics. We found that 41% of organizations are using agentic AI in their daily operations. These aren’t just pilot projects or one-off tests. They’re part of regular workflows.

    At the same time, governance is lagging. Only 27% of organizations say their governance frameworks are mature enough to monitor and manage these systems effectively.

    In this context, governance is not about regulation or unnecessary rules. It means having policies and practices that let people clearly influence how autonomous systems work, including who is responsible for decisions, how behavior is checked, and when humans should get involved.
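The three elements named above (ownership of decisions, checks on behavior, and a trigger for human involvement) can be made concrete as a simple policy object. This is a hypothetical sketch, not any real framework; every name and threshold below is illustrative.

```python
from dataclasses import dataclass

@dataclass
class GovernancePolicy:
    decision_owner: str           # who is accountable for this system's decisions
    max_autonomous_amount: float  # actions above this value require a human
    audit_every_n_actions: int    # how often behavior gets reviewed

def requires_human_review(policy: GovernancePolicy,
                          action_amount: float,
                          actions_since_audit: int) -> bool:
    """Return True when the policy says a person must get involved."""
    over_limit = action_amount > policy.max_autonomous_amount
    audit_due = actions_since_audit >= policy.audit_every_n_actions
    return over_limit or audit_due

policy = GovernancePolicy("fraud-ops-team", 500.0, 100)
print(requires_human_review(policy, 750.0, 10))  # over the limit -> True
print(requires_human_review(policy, 50.0, 10))   # fully autonomous -> False
```

The point of a sketch like this is that the answers to "who owns it," "what is monitored," and "when do people step in" are written down before the system acts, rather than argued about after something goes wrong.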

    This mismatch can become a problem when autonomous systems act in real situations before anyone can intervene.

    For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems behave “as designed,” unexpected conditions can lead to undesirable outcomes.

    This raises a big question: When something goes wrong with AI, who is responsible—and who can intervene?

    Why governance matters

When AI systems act on their own, responsibility no longer lies where organizations expect it. Decisions still happen, but ownership is harder to trace. For instance, in financial services, fraud detection systems increasingly act in real time to block suspicious activity before a human ever reviews the case. Customers often find out only when their card is declined.

    So, what if your card is mistakenly declined by an AI system? In that situation, the problem isn’t with the technology itself—it’s working as it was designed—but with accountability. Research on human-AI governance shows that problems happen when organizations don’t clearly define how people and autonomous systems should work together. This lack of clarity makes it hard to know who is responsible and when they should step in.

    Without governance designed for autonomy, small issues can quietly snowball. Oversight becomes sporadic and trust weakens, not because systems fail outright, but because people struggle to explain or stand behind what the systems do.

    When humans enter the loop too late

In many organizations, humans are technically “in the loop,” but only after autonomous systems have already acted. People tend to get involved once a problem becomes visible: a price looks wrong, a transaction is flagged, or a customer complains. By that point, the system has already decided, and human review becomes corrective rather than supervisory.

    Late intervention can limit the fallout from individual decisions, but it rarely clarifies who is accountable. Outcomes may be corrected, yet responsibility remains unclear.

    Recent guidance shows that when authority is unclear, human oversight becomes informal and inconsistent. The problem is not human involvement, but timing. Without governance designed upfront, people act as a safety valve rather than as accountable decision-makers.
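The timing difference described above can be sketched in a few lines: in the late pattern the system executes first and review can only flag what already happened, while in the early pattern an accountable reviewer approves before anything runs. This is an illustrative sketch only; the function names and the stand-in approval check are hypothetical.

```python
def act_then_review(action, execute, review):
    execute(action)        # the decision has already been carried out
    return review(action)  # review can only flag it after the fact

def review_then_act(action, execute, review):
    if not review(action):  # an accountable person decides up front
        return False
    execute(action)
    return True

approve = lambda amount: amount < 100  # stand-in for a human check

executed_late = []
act_then_review(250, executed_late.append, approve)  # acted anyway
print(executed_late)  # [250]

executed_early = []
review_then_act(250, executed_early.append, approve)  # blocked up front
review_then_act(50, executed_early.append, approve)   # approved, then runs
print(executed_early)  # [50]
```

Both patterns involve a human, but only the second makes that human a supervisory decision-maker rather than a safety valve.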

    How governance determines who moves ahead

    Agentic AI often brings fast, early results, especially when tasks are first automated. Our survey found that many companies see these early benefits. But as autonomous systems grow, organizations often add manual checks and approval steps to manage risk.

Over time, what was once simple slowly becomes more complicated. Decision-making slows down, workarounds multiply, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trust autonomous systems.

    This slowdown doesn’t have to happen. Our survey shows a clear difference: Many organizations see early gains from autonomous AI, but those with stronger governance are much more likely to turn those gains into long-term results, such as greater efficiency and revenue growth. The key difference isn’t ambition or technical skills, but being prepared.

Good governance does not limit autonomy. It makes autonomy workable by clarifying who owns decisions, how system behavior is monitored, and when people should intervene. International guidance from the OECD (the Organisation for Economic Co-operation and Development) emphasizes this point: Accountability and human oversight need to be designed into AI systems from the start, not added later.

    Rather than slowing innovation, governance creates the confidence organizations need to extend autonomy instead of quietly pulling it back.

    The next advantage is smarter governance

    The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, success will belong to organizations that clearly define ownership, oversight, and intervention from the start.

    In the era of agentic AI, confidence will accrue to the organizations that govern best, not simply those that adopt first.

    Murugan Anandarajan is a professor of decision sciences and management information systems at Drexel University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


