• jubilationtcornpone@sh.itjust.works · 6 days ago

    That is one bullshit headline. Forbes is keeping the AI pump-and-dump scheme going.

    TLDR: Participants correctly identified that a written response came from an “AI” chatbot slightly less often than they correctly identified that a response came from a psychotherapist.

    “AI” cannot replace a therapist and hasn’t “won” squat.

    • asap@lemmy.world · 6 days ago

      A bit disingenuous not to mention this part:

      Further, participants in most cases preferred ChatGPT’s take on the matter at hand. That was based on five factors: whether the response understood the speaker, showed empathy, was appropriate for the therapy setting, was relevant for various cultural backgrounds, and was something a good therapist would say.

      • PapstJL4U@lemmy.world · 6 days ago

        Patients are saying they liked what they heard, not whether it was correct or relevant to the underlying cause. There is not even a pipeline for escalation, because AIs don’t think.

          • asap@lemmy.world · 5 days ago

            You can’t say “Exactly” when you tl;dr’d and removed one of the most important parts of the article.

            Your human summary was literally worse than AI 🤦

            I’m getting downvoted, which makes me suspect people think I’m cheerleading for AI. I’m not. I’m sure it sucks compared to a therapist. I’m just saying that the tl;dr also sucked.