    Study finds asking AI for advice could be making you a worse person

March 30, 2026
Whether we like it or not, AI has infiltrated the workplace, and employees are under pressure to use it. According to a new study, however, you may want to skip asking AI to help you manage matters of the heart.

The two-part study, titled "Sycophantic AI decreases prosocial intentions and promotes dependence," was recently published in Science. The experiment made the case that using chatbots for personal advice and for navigating emotional situations can be harmful because the systems are designed to tell people what they want to hear. Chatbots may therefore reinforce troubling behavior rather than help people take accountability for harm and apologize.

A recent Cognitive FX poll found about 38% of Americans report using AI chatbots weekly for emotional support, while a recent Pew Research study found that 12% of teens use AI for advice. According to a KFF poll, a lack of insurance also drives usage, with uninsured adults more likely than insured adults to turn to chatbots (30% vs. 14%).

For the latest study, researchers examined the prevalence of sycophancy, defined as "the tendency of AI-based large language models to excessively agree with, flatter, or validate users," across 11 leading AI models, including GPT-4o, Claude, and Google's Gemini.

The researchers conducted three experiments with 2,405 participants. In the first study, they gave the AI models a series of advice-seeking questions, posts from Reddit's Am I the Asshole (AITA) forum, and descriptions of wanting to harm other people or oneself, then compared the AI responses with human judgments. Overall, the models were on average 49% more likely than a human to endorse a user's actions, even when those actions were harmful or illegal.

    In the second study, participants imagined they were in a scenario described by an AITA post, where their actions had been judged as wrong. Then they read either a reply written by a human saying they were in the wrong, or a reply written by an AI saying they were in the right. In the third study, participants discussed a real conflict in their lives with an AI or a human.

Worryingly, participants both trusted and preferred responses from sycophantic AI that affirmed their actions. They also became more convinced that their original actions were correct, essentially having beliefs they already held reaffirmed rather than being challenged by the chatbot to think differently about the situation. The study noted that this reaffirmation made participants less likely to apologize after talking to the chatbot.

    “In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the study explained. 

While taking advice from AI isn't new, the study shows just how harmful it can be. Much as social media algorithms drive engagement by enraging users, AI is chipping away at our ability to apologize and take accountability for hurting someone. As the study's authors noted, "The very feature that causes harm also drives engagement."


