    AI wrote the code. You got hacked. Now what?

    October 29, 2025
    When AI systems started spitting out working code, many teams welcomed them as productivity boosters: developers used them to speed through routine tasks, and leaders celebrated the gains. But weeks later, some of those companies faced security breaches traced back to that code. The question is: Who should be held responsible?

    This isn’t hypothetical. In a survey of 450 security leaders, engineers, and developers across the U.S. and Europe, 1 in 5 organizations said they had already suffered a serious cybersecurity incident tied to AI-generated code, and more than two-thirds (69%) had uncovered flaws created by AI.
    Mistakes made by a machine rather than by a human are now directly linked to breaches causing real financial, reputational, and operational damage. Yet artificial intelligence isn't going away: most organizations feel pressure to adopt it quickly, both to stay competitive and because the promise is so powerful.
    And still, the responsibility centers on humans.
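
    The survey doesn't say which specific flaws caused those incidents, but a minimal, hypothetical Python sketch shows the kind of vulnerability generated code can quietly reproduce: a database query assembled by string concatenation, open to SQL injection, next to the parameterized form a reviewer should insist on. The table and function names here are illustrative, not drawn from the research.

        import sqlite3

        def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
            # Vulnerable pattern: untrusted input is concatenated into the SQL
            # string, so input like "x' OR '1'='1" makes the filter always true.
            query = "SELECT id, username FROM users WHERE username = '" + username + "'"
            return conn.execute(query).fetchall()

        def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
            # Parameterized query: the driver treats the value as data, not SQL.
            return conn.execute(
                "SELECT id, username FROM users WHERE username = ?", (username,)
            ).fetchall()

    The two versions look almost identical at a glance, which is exactly why such flaws slip past busy reviewers.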

    A blame game with no rules

    When respondents were asked who should be held responsible for an AI-related breach, no clear answer emerged. Just over half (53%) said the security team should take the blame, for missing the issues or for failing to set clear guidelines. Meanwhile, nearly as many (45%) pointed the finger at the individual who prompted the AI to generate the faulty code.

    This divide highlights a growing accountability void. AI blurs the once-clear boundaries of responsibility. Developers can argue they were just using a tool to improve their output, while security teams can argue they can’t be expected to catch every flaw AI introduces. Without clear rules, trust between teams can erode, and the culture of shared responsibility can begin to crack. 

    Some respondents went further, blaming the colleagues who approved the code or the external tools meant to check it. No one knows whom to hold accountable.

    The human cost 

    In our survey, 92% of organizations said they worry about vulnerabilities from AI-generated code. That anxiety fits into a wider workplace trend: AI is meant to lighten the load, yet it often does the opposite. Fast Company has already explored the rise of “workslop”—low-value output that creates more oversight and cleanup work. Our research shows how this translates into security: Instead of removing pressure, AI can add to it, leaving employees stressed and uncertain about accountability.

    In cybersecurity, specifically, burnout is already widespread, with nearly two-thirds of professionals reporting it and heavy workloads cited as a major factor. Together, these pressures create a culture of hesitation. Teams spend more time worrying about blame than experimenting, building, or improving. For organizations, the very technology brought in to accelerate progress may actually be slowing it down.

    Why it’s so hard to assign responsibility

    AI adds a layer of confusion to the workplace. Traditional coding errors could be traced back to a person, a decision, or a team. With AI, that chain of responsibility breaks. Was it the developer’s fault for relying on insecure code, or the AI’s fault for creating it in the first place? Even if the AI is at fault, its creators won’t be the ones carrying the consequences.

    That uncertainty isn’t just playing out inside companies. Regulators around the world are wrestling with the same question: If AI causes harm, who should carry the responsibility? The lack of clear answers at both levels leaves employees and leaders navigating the same accountability void.

    Workplace policies and training are still behind the pace of AI adoption. There is little regulation or precedent to guide how responsibility should be divided. Some companies monitor how AI is used in their systems, but many do not, leaving leaders to piece together what happened after the fact, like a puzzle missing key pieces.

    What leaders can do to close the accountability gap

    Leaders cannot afford to ignore the accountability question. But setting expectations doesn’t have to slow things down. With the right steps, teams can move fast, innovate, and stay competitive, without losing trust or creating unnecessary risk.

    • Track AI use
      Make it standard to record when and where AI contributes code, and make that record visible across teams (a minimal hook sketch follows this list).
    • Share accountability
      Avoid pitting teams against each other. Set up dual sign-off, the way HR and finance might both approve a new hire, so accountability doesn’t fall on a single person.
    • Set expectations clearly
      Reduce stress by making sure employees know who reviews AI output, who approves it, and who owns the outcome. Build in a short AI checklist before work is signed off.
    • Use systems that provide visibility 
      Leaders should look for practical ways to make AI use transparent and trackable, so teams spend less time arguing over blame and more time solving problems.
    • Use AI as an early safeguard
      AI isn't only a source of risk; it can also act as an extra set of eyes, flagging issues early and giving teams more confidence to move quickly (see the second sketch below).

    Communication is key

    Too often, organizations only change their approach after a serious security incident. That can be costly: The average breach is estimated at $4.4 million, not to mention the reputational damage. By communicating expectations clearly and putting the right processes in place, leaders can reduce stress, strengthen trust, and make sure accountability doesn’t vanish when AI is involved.

    AI can be a powerful enabler, but without clarity and visibility it risks eroding confidence. With the right guardrails, it can deliver both speed and safety. The companies that will thrive are those that create the conditions to use AI fearlessly: recognizing its vulnerabilities, building in accountability, and fostering a culture that reviews and improves at AI speed.


