    Most people can’t tell when a personal text message is written by AI. Here’s why it matters

April 25, 2026


    Two new experiments show that most people do not even consider that a personal message could be AI-generated, even when they themselves use artificial intelligence to write.

    To see how people judge someone based on their writing in the age of ChatGPT, my colleague Jiaqi Zhu and I recruited more than 1,300 U.S.-based participants, ages 18 to 84, and showed them AI-generated messages like an apology sent in an email. We split our volunteers into four groups: Some people saw the messages with no information about who or what wrote them, as in everyday life. Others were told the messages were definitely written by a human, definitely AI-generated, or that the source could be either.

    We found a clear “AI disclosure penalty.” When people knew a message was AI-generated, they rated the sender much more negatively (“lazy,” “insincere,” “lack of effort”) than when they believed that the same text was written by a person (“genuine,” “grateful,” “thoughtful”).

But here’s the twist: The participants who were told nothing about authorship formed impressions that were just as positive as those of people who were told the messages were genuinely written by a human.

    An AI-generated fictional apology sent via text was one of the messages participants evaluated in a recent study. [Source: Zhu & Molnar (2026)]

    This complete lack of skepticism surprised us—and it raises new questions. Maybe participants were not familiar enough with AI to realize that today’s models can produce detailed and personal messages. (They can.) Or perhaps participants have never used AI themselves. (They likely have.) So we also tested whether participants’ own AI use changed how they judged senders.

    To our even bigger surprise, we found little to no effect. People who use generative AI quite frequently in their daily lives—at least every other day—did penalize AI use slightly less when AI authorship was disclosed, compared with people who never or rarely use AI. But participants were no more skeptical by default: When authorship was not disclosed, heavy AI users, light AI users, and nonusers all tended to assume the text was written by a person and formed essentially the same impressions.

    Why it matters

A lack of skepticism and the absence of negative impressions matter because people make social judgments from text all the time. Recipients treat the time and effort someone puts into a written message as a window into the writer’s sincerity, authenticity, or competence, and those impressions shape decisions in friendships, dating, and work.

    Yet our main findings reveal a striking disconnect: People usually don’t suspect AI use unless it is obvious. This unawareness creates a moral dilemma: People who use AI in secret can enjoy the benefits while facing almost no risk of detection. Meanwhile, paradoxically, people who are up front and admit to using AI suffer a reputational hit.

    Word clouds depict participants’ first impressions of senders who wrote messages themselves, left, and those who used AI, right. [Source: Andras Molnar]

    Over time, a lack of skepticism and awareness could reshape what writing means in everyday life. Readers might learn to treat writing as a less reliable signal of someone’s character or effort, and instead rely on other forms of communication. For example, widespread AI use has already prompted employers to discount the value of cover letters from job applicants. Instead, they’re relying more on personal recommendations from an applicant’s current supervisor or connections made through in-person networking.

    What other research is being done

    Other researchers have documented a wide range of negative impressions about people who disclose their AI use. Studies show it makes job applicants seem less desirable and employees seem less competent. Readers of creative writing perceive AI users as less creative and inauthentic. People see personal apologies and corporate apologies that stem from AI as less effective. In general, disclosing AI use decreases trust and undermines legitimacy.

    Yet without disclosure, there is clear evidence that most people cannot reliably detect AI-generated text, even with the help of detection tools, especially when the text is a mix of human-written and AI-generated content. Even when people feel confident about their ability to spot AI text, their confidence may be nothing more than a self-affirming illusion.

    What’s next

    Even though our experiments did not reveal suspicion of AI use, that doesn’t mean people never suspect it in the real world. In some settings, people may already be hypervigilant about AI. Use in academia is an obvious example. In our next studies, we want to understand when and why people naturally start to suspect AI use, and what flips the switch between trust and doubt.

    Until then, if you want your personal message to be judged as heartfelt, the safest strategy may be to make a phone call, leave a voicemail, or, better yet, say it in person.


    The Research Brief is a short take on interesting academic work.

    Andras Molnar is an assistant professor of psychology at the University of Michigan.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.



