• Why you shouldn't ask ChatGPT for relationship advice, it'll just tell you you're right

    From TechnologyDaily@1337:1/100 to All on Wednesday, April 01, 2026 11:30:27
    Why you shouldn't ask ChatGPT for relationship advice, it'll just tell you
    you're right and 'may worsen rather than resolve conflict'

    Date:
    Wed, 01 Apr 2026 10:24:49 +0000

    Description:
    AI chatbots like ChatGPT are too eager to agree with users in personal conflicts, leading to social problems

    FULL STORY ======================================================================

    • A new study found that AI chatbots are far more likely than humans to
      validate users during personal conflicts
    • That tendency can become dangerous when people use chatbots for advice
      about fights
    • AI can easily make people feel overly justified in making bad decisions

    Bringing interpersonal drama to an AI chatbot isn't exactly why developers
    built the software, but that isn't stopping people in the middle of fighting
    with friends and family from seeking (and getting) validation from digital
    supporters.

    AI chatbots are always available, endlessly patient, and very good at
    mimicking the right emotions. Too good, really: they often default to
    agreeing with users, potentially causing much bigger problems, according to
    a new study published in Science. The study examined how leading AI models
    respond when users describe personal disputes and ask for guidance. The
    result is a finding that feels both obvious and deeply unsettling: AI models
    align with whoever engages them, regardless of context or consequences.

    "Across 11 state-of-the-art models, AI affirmed users' actions 49% more often than humans, even when queries involved deception, illegality, or other harms," the researchers explained. "[E]ven a single interaction with sycophantic AI reduced participants' willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right."

    Of course, most people who turn to a chatbot in the middle of a conflict aren't looking for an honest assessment of whether their feelings or actions are justified, just vigorous agreement. And while a human confidant may sympathize, a real friend will also push back when warranted. If someone insists they've never once done anything wrong in a relationship, or that they're not dramatic and will set themselves on fire if anyone calls them dramatic, a true friend will gently nudge them back to reality.

    Chatbots don't do that. If a person arrives feeling hurt, angry, embarrassed, or morally righteous, the AI often responds by simply rewording those
    feelings to be even more persuasive. Conflict is precisely when people are already at their least reliable as narrators, yet the AI's responses end up
    hardening views and amplifying emotions.

    The researchers found that the AI doesn't even have to explicitly say "you
    are right" for this to happen. The soft, affirming language makes it harder
    to spot signs of reckless or immature behavior. The AI encourages every
    impulse, no matter how problematic, unethical, or illegal.

    AI devil on the shoulder

    Basically, the same qualities that make chatbots feel appealing in
    emotionally messy moments also make them risky. But people enjoy being
    agreed with, and a cold, rude, or reflexively contrarian AI isn't appealing
    to most people (except when requested).

    "Despite distorting judgment, sycophantic models were trusted and preferred. This creates perverse incentives for sycophancy to persist," the paper points out. "The very feature that causes harm also drives engagement. Our findings underscore the need for design, evaluation, and accountability mechanisms to protect user well-being."

    It may be a harder design problem than AI developers want to admit, and one
    that matters more as these systems become embedded in ordinary life. AI is
    already marketed as a coach, companion, and advisor. Those roles sound
    benign until you remember how much of being a good advisor involves
    occasionally saying no or telling you to slow down.

    Telling a user they might be wrong is hard to market. But a tool designed to feel supportive, yet one that makes people worse at resolving conflict and
    limits their ability to grow emotionally, is a nightmare worse than any argument you might have with a loved one.

    And ChatGPT and Gemini agree with me.




    ======================================================================
    Link to news story: https://www.techradar.com/ai-platforms-assistants/why-you-shouldnt-ask-chatgpt-for-relationship-advice-itll-just-tell-you-youre-right-and-may-worsen-rather-than-resolve-conflict


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)